SARS-CoV-2 Mpro conformational changes induced by covalently bound ligands
Abstract SARS-CoV-2's main protease (Mpro) interaction with ligands has been explored with a myriad of crystal structures, most of them monomeric. Nonetheless, Mpro is known to be active as a dimer, and the relevance of dimerization for the ligand-induced conformational changes has not been fully elucidated. We systematically simulated different Mpro-ligand complexes through molecular dynamics (MD), aiming to study their conformational changes and interactions. We focused on covalently bound ligands (N1 and N3, ~9 μs per system, both monomers and dimers) and compared these trajectories against the apostructure. Our results suggest that the monomeric simulations lead to an unrealistically flexible active site. In contrast, the Mpro dimer displayed a stable oxyanion-loop conformation along the trajectory. Also, ligand interactions with residues His41, Gly143, His163, Glu166 and Gln189 are postulated to significantly impact the ligands' inhibitory activity. In dimeric simulations, Gly143 and His163 in particular show increased interaction frequencies. In conclusion, long-timescale MD is a more suitable tool for exploring in silico the activity of bioactive compounds that potentially inhibit the dimeric form of SARS-CoV-2 Mpro. Communicated by Ramaswamy H. Sarma
Introduction
A viral illness has been a worldwide concern since its first report in December 2019 in Wuhan, China; its causative agent was named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) (Wu et al., 2020). The disease caused by this new coronavirus was classified by the World Health Organization (WHO), in February 2020, as Coronavirus Disease 2019 (COVID-19). The outbreak was declared a pandemic in March 2020. By October 2020, ~39 million cumulative cases had been recorded globally, with over a million deaths (WHO Situation Reports, 2021).
Currently, patients with COVID-19 are treated with repurposed drugs, whose effects are often controversial due to adverse events or the lack of fully proven clinical verification of their therapeutic effects. Therefore, there is still a need for novel treatments, and the investigation of potential drug targets remains the cornerstone when designing novel, safe and effective antiviral drugs (Penman et al., 2020).
The main protease of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2 Mpro, herein referred to as Mpro for short) is a cysteine protease that plays a crucial role in the virus' life cycle, since it releases the replicases pp1a and pp1ab; these functional peptides are essential for replication and transcription of the virus (Pillaiyar et al., 2016). Mpro is conserved in all coronaviruses and lacks a human homolog, increasing its attractiveness as a druggable target. Recent studies reported noncovalent Mpro inhibitors with high antiviral activity (IC50 = 1 μM) and no cytotoxicity (Mendoza et al., 2020; Zhang et al., 2021). However, to the best of our knowledge, there are no SARS-CoV-2 Mpro inhibitors available for clinical use.
Mpro consists of a polypeptide chain with 306 amino acids structured in three domains (S-I, S-II and S-III) connected by a flexible loop (Figure 1A). The S-I and S-II domains have a complementary antiparallel β-barrel fold relevant to the protease mechanism. The S-III domain contains five α-helices arranged in a broadly antiparallel globular cluster linked to the S-II domain through the flexible loop. SARS-CoV-2 Mpro has a conserved catalytic dyad composed of Cys145 and His41 (Figure S1). Further, the substrate-binding site, located on the surface between the S-I and S-II domains (in green and orange, respectively, in Figure 1A), includes the oxyanion hole residues, relevant for substrate binding (Suárez & Díaz, 2020), and is covered by a loop (in yellow, Figure 1A). Mpro has been co-crystallized with two lead compounds (N1 and N3; Zhang et al., 2020). The main sub-pockets (color-coded in Figure 1C and D) are P1 (containing Phe140 and Glu166; in green), P1′ (His41, Gly143, Ser144, Cys145 and His163; in pink), P2 (His164 and Met49; in cyan) and P3 (Pro168 and Gln189; in black).
The oxyanion-loop (residues 138−146) and the catalytic residues are localized between subpockets P1 and P1′, both of which are critical for substrate binding (Suárez & Díaz, 2020). The oxyanion-loop stabilizes the partial negative charge on the P1 carbonyl group of the peptide substrate during the hydrolysis of the P1−P1′ bond. The catalytically active conformation is stabilized by the interaction between the substrate and main-chain atoms of Gly143 and Cys145. In active conformations of Mpro, Cys145 is in direct interaction with His41.
SARS-CoV-2 Mpro is active as a dimer. The main dimer interface includes the S-III domains (in purple, Figure 1A), with the participation of the N-finger (Figure 1B). The N-finger is composed of the first seven N-terminal residues (from the S-I domain, highlighted in pink in Figure 1B); deletion of these residues in the 3CLpro homologue reduces dimerization and, consequently, abolishes enzymatic activity (<1%). Further, also in 3CLpro, it has been shown that the Arg4Ala mutation reduces enzymatic activity.
Despite the relevance of dimerization to the active site's conformation (Suárez & Díaz, 2020), most of the current SARS-CoV-2 Mpro crystal structures in the Protein Data Bank (PDB) are presented as monomers. Additionally, the few simulation-based studies available for Mpro rely on monomeric structures with insufficient sampling due to their short timescale (50 ns−2 μs) (Komatsu et al., 2020; Peterson, 2020). It is well known that long-timescale simulations are needed to ensure that the observed conformational changes are statistically relevant (Henzler-Wildman & Kern, 2007). For this reason, we have simulated covalently bound ligands (using N1 and N3 as model ligands, ~9 μs per system, monomers and dimers) and compared these trajectories against the apostructure. We have analyzed the major protein movements and observed that the dimeric state is more stable than the monomeric state, especially where the interaction between the N-finger and the oxyanion-loop is concerned. Our investigation aims to clarify the relevance of dimerization for the active conformation and ligand binding in Mpro studies.
Results and discussion
SARS-CoV-2 Mpro is more stable as a dimer than as a monomer
Principal Component Analysis (PCA) of all the simulations pooled together indicated that most of Mpro's large movements were captured in the first two components, with the first component accounting for 82.8% and the second for 4.3% of the total motion. The first PC separates the ligand-bound monomers into two conformations (Figure 2A and B), accounting for 24% and 31% of the analyzed trajectory for N3 and N1, respectively. However, similar behavior is not seen in the apostructure simulations. In a more detailed analysis of the monomer simulations (ligand-bound), the PC motion is characterized by a coordinated movement between the S-II and S-III domains (Figure 2C), where S-III turns away from its original conformation, potentially interfering with the dimerization interface (Suárez & Díaz, 2020). Dimeric Mpro simulations do not show this movement, as the dominant feature is a small variation in loop conformation (between S-II and S-III). This indicates that the dimeric protein is more stable than the monomer, at least with respect to the S-III movement. It is interesting to note that SARS-CoV-2 Mpro is biologically active as a homodimer (Silvestrini et al., 2021; Suárez & Díaz, 2020).
In addition, this is supported by our finding that monomeric simulations have higher protein backbone's flexibility (as represented by RMSF values, Figure 3A) when compared to their dimeric counterparts ( Figure 3B). This is especially manifested within the S-III domain region.
SARS-CoV-2 Mpro dimer is stabilized by interactions in the S-III interface and N-finger
We also investigated whether the higher flexibility of the S-III residues would interfere with the dimerization interface. The Mpro dimer X-ray structure is held together by interactions between several residue pairs: Ser1A-Phe140B, Ser1A-Glu166B, Ser1A-His172B, Arg4A-Glu290B, Arg4A-Lys137B, Ala7A-Val125B, Ser10A-Ser10B, Gly11A-Glu14B and Ser139A-Gln299B (Suárez & Díaz, 2020). However, during the MD simulations the most frequent interactions between subunits were the following: Arg4A-Glu290B (side-chain), Ala7A-Val125B, Ser10A-Ser10B and Gly11A-Glu14B (Figure 4A and B, and Figure S4B-E). In particular, the S-II (Arg4A-Val125B) and S-III (Arg4A-Glu290B) interactions seem to contribute to dimer stability. These findings are supported by Wang et al. (2020), who describe the dimerization as being driven by the Arg4A-Glu290B interaction. Given the sequence conservation and the stability of the Arg4-Glu290 interaction, we hypothesize that Arg4 is relevant for dimerization/activity in SARS-CoV-2 Mpro, which could be validated by experimental mutations.
Interestingly, changes in frequency between the dimeric ligand-bound and apo structures are observed for Arg4-Lys137 and for the Ser1-Phe140 (113; considered a weak interaction)/Glu166 (92; a strong hydrogen bond)/His172 (102; medium) interactions, all of which involve the N-finger region (Figure S5). The strong hydrogen bond between Ser1A-Glu166A is also observed in the dimeric crystal structure (PDB: 7CB7).
The work of Chou et al. (2004) observed that interference with the Arg4-Glu290 interaction reduced SARS-CoV-1 dimerization and that, specifically, Glu290Ala was enzymatically inactive whereas Arg4Ala was not (Chou et al., 2004). This supports the role of dimerization in activity for this enzyme family.
Further, interactions between Ser1 and the residues Phe140, His172, and Glu166 were less frequent in the overall analyzed trajectory ( Figure 4B-E and Figure S4A and B). We postulate that Ser1 helps to shape the substrate pocket in the normal catalysis, but does not contribute to the inhibited state, as Glu166 and Phe140 are involved in the inhibitor stabilization (see below).
The complete deletion of the N-finger in SARS-CoV-1 Mpro reduces the extent of dimerization and completely abolishes the enzymatic activity (<1%). This was corroborated for SARS-CoV-2's Mpro by simulations (Suárez & Díaz, 2020), suggesting that the N-finger conformation upon dimerization exerts a direct influence on the oxyanion-loop (namely Ser139) motions.
Our results also indicate a water-mediated interaction between Ser139A-Gln299B with high frequency (>75% of the analyzed trajectory) in all studied systems (Figure 4F and Figure S4F). We suggest that a water molecule could structurally integrate this region, contributing to stabilizing the intermolecular interactions; however, this remains to be confirmed (Raschke, 2006).
Interestingly, Suárez & Díaz (2020) suggested a direct hydrogen bond between Ser139A-Gln299B (12% of their simulated time), which is something we exclusively observed in our apostructure simulations (30% of the analyzed trajectory). It is noteworthy that Ser139A-Gln299B can also participate in the dimer's oxyanion stabilization (Figure S6A and B), which links dimerization stability with the conformation of the active site.
Additionally, free-energy calculations with the N3 ligand in both monomeric and dimeric states suggested a lower interaction energy in the latter for SARS-CoV-2 Mpro, but not for SARS-CoV Mpro (Bello, 2020). The energy decomposition into the most relevant residues suggests that His41, Met49, Ser144, and Cys145 contributed significantly to the binding affinity (Bello, 2020). These results agree with the catalytic mechanism that shows the involvement of the main-chain amides of Gly143, Ser144, and Cys145 in substrate cleavage in SARS-CoV-1 (Chen et al., 2006). Accordingly, the functionality of the dimer is probably due to the interaction of the N-finger of each of the two monomers with Glu166 of the other monomer, which establishes the S-I domain, the pocket occupied by the substrate (Hsu et al., 2005).
Furthermore, in terms of the catalytic site, we propose that the interactions between Asn28-Cys145 and Gly143-Cys145 backbone atoms would stabilize the reactive conformation of Cys145 (Figure S6G and H). The Asn28-Cys145 interaction had a high frequency (>80%) in all systems, whereas Cys145-Gly143 was 40% more frequent in the dimeric simulations (Figure S6G and H). We also detected a smaller variation in the radius of gyration for Cys145 in dimeric simulations (22-23 Å) than in the monomeric (10-25 Å) forms (Figure S6C and D), and a smaller overall fluctuation (Figure S6E and F). This indicates that dimerization plays a role in stabilizing the active oxyanion-loop conformation.
Hydrogen bond interactions in the P1′ region are influenced by dimerization
It is known that the oxyanion-loop is stabilized through a partial negative charge on the P1 carbonyl group of the peptide substrate during the hydrolysis of the P1−P1′ bond (Rut et al., 2021). Given our observation that dimerization stabilized the oxyanion-loop region, we further investigated its influence on inhibitor binding to explain the differences in the ligands' inhibitory effects (Figure 5). We observed that the ligand N3 has different hydrogen bond frequencies with the amine group of Gly143 for the monomer (<30% of the analyzed simulation time) and the dimer (~70%, Figure 5A and D). Further, the mean distances between the amine group of Gly143 and both ligands are smaller in dimeric simulations (N1: M < 3.7 Å and D < 1.8 Å; N3: M < 3.5 Å and D < 2.5 Å) (Figure 5F).
Another frequent interaction between Mpro and the inhibitors was detected for Glu166. The hydrogen bond between its side-chain and the pyrrolidine-2-one moieties of the ligands was stable throughout all the simulations (~98%, all systems, Figure 5A and D). Meanwhile, the Glu166 backbone NH amide had a water-bridge with both N3 (~60% for the monomer and ~30% for the dimer, Figure 5C and D) and N1 (~50% for the monomer and ~15% for the dimer, Figure 5C and E). This water-mediated interaction seems to be more relevant for the monomeric simulations. Interestingly, the N1 carbonyl group of the pyrrolidine-2-one ring displayed hydrogen bond interactions with His163 (P1) in dimeric simulations (Figure 5A), whereas with N3 the monomeric states had water-mediated interactions. We hypothesize that the Phe140-His163 π-π interaction (with an average distance of ~4.1 Å, Figure 5I) would be relevant to lock His163 in a hydrogen-bond-prone conformation. Moreover, Phe140 had more frequent hydrophobic contacts with the benzyl moiety in the N1 simulations (>40%) than in N3 (<5%) (Figure 5B). This low frequency of hydrophobic contacts for both ligands is in agreement with previous work, which suggested that the corresponding hydrophobic interactions were not crucial for the inhibition but more relevant in maintaining the His163 hydrogen bonding with the ligand (Ghosh et al., 2020; Zhang et al., 2020).
Biochemical data for N3 (IC50 = 9.0 ± 0.8 μM in an enzymatic inhibition assay) (Yang et al., 2005) and for N1 (IC50 = 0.6 ± 0.1 μM) indicated a similar binding mechanism, with N1 being the more potent inhibitor. We suggest that the differences in the His41 (P1′) interaction frequency between the inhibitors may explain their distinct inhibitory potencies. The inhibitor N1 exclusively showed an interaction between the hydroxyl ethene group and His41 (Figure 5A and E), while N3's ethene group cannot form these interactions (Figure 5D). Further, it has been reported that His41 is considered a cold spot among homologue sequences, and missense changes led to lack of activity (Krishnamoorthy & Fakhro, 2021).
Importantly, in SARS-CoV-2 Mpro, the key active site residues His41 (3 mutations), Phe140 (1 mutation), Cys145 (3 mutations), Glu166 (3 mutations), and His172 (1 mutation) showed low mutation frequencies (a total of 11 out of 525 mutations at the active site).
The hydration site in the P3 site can be explored for increasing potency
We also observed that Gln189 (P3 pocket), in both monomer and dimer, established a hydrogen bonding interaction with the N3 carbonyl (~40%, Figure 5A, D, and E) and a weak water-bridge with the N1 pyrrolidine (~20%, Figure 5C-E). As a result of these interactions, the Gln189-Gln192 loop becomes less flexible in the presence of the inhibitor. This is in line with previously reported Mpro-inhibitor simulations suggesting that the loop connecting S-II−S-III would show a decrease in mobility upon inhibitor binding (Suárez & Díaz, 2020).
WaterMap calculations (Abel et al., 2008) were performed to analyze the solvation impact within Mpro. Specifically, the protein's hydration sites surrounding the residues His41, Cys145, His163, Glu166, and Gln189 (P1, P1′, and P3 sites) were calculated. The hydration sites in the P3 site (near the Pro168 and Gln189 residues) had the highest occupancy values (>80%) and free-energy (ΔG) values (3.15 kcal/mol). Specifically, hydration site 2 exhibited the highest occupancy values (0.87-0.89; Figure 6A and B). It has been suggested that the P3 site is a conserved hydrophobic pocket and that moieties bulkier than 2-pyridone in the P3 region could therefore substantially contribute to increasing the inhibitory potency. This is supported by our results, as displacing those high-energy water molecules would result in stronger binding. Previous studies reported that this highly flexible site of Mpro (Bzówka et al., 2020) could be addressed by bulkier hydrophobic moieties; however, pyrrolidines or amine groups were shown to be poor groups to stabilize it. In the quest for novel drugs to treat SARS-CoV-2 infection, N1 has already proven functionality and reliable characteristics as a lead compound. We hope this work contributes to that effort: a dynamic understanding of the binding mode can be beneficial when developing subsequent strategies, such as scaffold hopping (Böhm et al., 2004) and molecular simplification (Pinacho Crisóstomo et al., 2006). In particular, we believe that designing larger/bulkier molecules to better occupy pockets such as P1 and P3 (Figure 7) would be beneficial. Finally, we would like to emphasize that although this study discusses the high stability of the dimeric SARS-CoV-2 Mpro, by no means can we conclude that the protein would not undergo more extensive conformational changes in simulations with longer timescales (milliseconds).
Conclusion
We report microsecond MD simulations of SARS-CoV-2 Mpro, comparing covalently bound ligand-protein complexes with the apostructure, in both monomeric and dimeric configurations. Simulations with monomeric Mpro revealed a large conformational change, mainly in the S-III domain. According to the PCA analyses, the dimeric simulations pointed to small conformational changes, being stabilized by a network of hydrogen-bond interactions in the dimerization interface. Mpro is biologically active as a dimer, and the results suggest that the dimer is more stable than the monomer, with implications for the oxyanion-loop and catalytic site conformations.
In covalently bound systems, it was observed that the catalytic His41 and Glu166 were key residues for stabilizing Mpro's inhibited state. Additionally, we suggest that substituents bulkier than pyrrolidine could increase activity against SARS-CoV-2 Mpro by occupying the hydrophobic P3 sub-pocket. We envision that this study can set a standard for Mpro MD simulations and benefit the search for novel bioactive compounds against SARS-CoV-2 Mpro.
Figure 5. Overall interactions of SARS-CoV-2 Mpro with the inhibitors' sub-pockets (P1, P1′, P2 and P3). Frequencies of contacts for hydrogen bonds (A); hydrophobic contacts (B); and water bridges (C) of Mpro (monomer and dimer) with the N1 and N3 ligands. Snapshot frames of interactions between the Mpro dimeric form and the ligands N3 (D) and N1 (E). Distances between Mpro amino acid residues (monomer and dimer) and the ligands N1 and N3 along the simulations for Gly143 (F); Glu166 (G); His163 (H); Phe140-His163 (I). Mpro residues are colored according to atom type (protein carbon, light gray; nitrogen, blue; oxygen, red; N3, sky-blue; N1, magenta).
SARS-CoV-2 Mpro amino acid conservation during evolution
The evolutionary conservation of SARS-CoV-2 Mpro amino acid residues was calculated using the ConSurf server (Landau et al., 2005), comparing the Mpro sequence against known homolog sequences available in the PDB. This analysis assigns each amino acid residue a conservation score ranging from 1 (least conserved) to 9 (most conserved) and provides a structural visualization of it.
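ConSurf's actual grades come from a phylogenetic analysis of the homolog alignment; purely as an illustration of the 1-9 grading idea, a crude stand-in that scores alignment columns by Shannon entropy (the function name and the entropy-based binning are our own simplification, not ConSurf's algorithm) could look like:

```python
import math

def conservation_scores(alignment):
    """Map per-column Shannon entropy of a sequence alignment to a
    1 (variable) .. 9 (conserved) scale, mimicking ConSurf-style grades."""
    n_cols = len(alignment[0])
    scores = []
    for c in range(n_cols):
        column = [seq[c] for seq in alignment if seq[c] != '-']
        counts = {}
        for aa in column:
            counts[aa] = counts.get(aa, 0) + 1
        total = len(column)
        entropy = -sum((k / total) * math.log2(k / total)
                       for k in counts.values())
        max_entropy = math.log2(20)  # 20 amino acid types
        # invert and bin: zero entropy -> 9, maximal entropy -> 1
        grade = 9 - int(8 * entropy / max_entropy)
        scores.append(max(1, min(9, grade)))
    return scores
```

A fully conserved column gets grade 9; a fully mixed one approaches grade 1.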
The selected PDB protein structures were prepared by adding hydrogen atoms and fixing missing side chains using the Protein Preparation Wizard (PrepWiz) (Madhavi Sastry et al., 2013), implemented in the Small Molecule Drug Discovery Suite 2019v.3 (Schrödinger LLC, New York, NY, USA). Sulfate ions and other co-crystallization molecules, such as glycerol (GOL), were removed. Within the catalytic site of Mpro, His41 can act as a proton shuttle in the catalytic cycle (Pavlova et al., 2021). Accordingly, His41 (atom NE), His163 (ND), His164 (ND) and His172 (ND) were protonated (apostructures). The His41 ionization and tautomerization states were chosen as previously discussed in (Paasche et al., 2014).
The chosen protein crystals were analyzed according to their biological assembly state using the PISA website (Krissinel & Henrick, 2005) (https://www.ebi.ac.uk/pdbe/pisa/) to generate their dimeric state. Dimers for the different systems were minimized using Prime (Jacobson et al., 2004) with default options.
Molecular dynamics simulations
Prepared SARS-CoV-2 Mpro structures were simulated as apostructures (without ligands) and covalently bound to ligands (N1 and N3). Molecular Dynamics (MD) simulations were carried out using the Desmond engine (Bowers et al., 2007) with the OPLS3e force field (Harder et al., 2016), according to a previously described protocol (Ferreira et al., 2019). OPLS3e covers a broad range of chemical moieties and combines them on-the-fly to generate the parameterization, followed by the assignment of partial charges (Roos et al., 2019). In short, the system encompassed the protein-ligand/cofactor complex, a predefined water model (TIP3P (Jorgensen et al., 1983)) as solvent, and counterions (Na+ or Cl−, adjusted to neutralize the overall system charge). The entire system was treated in a cubic box with periodic boundary conditions (PBC), with a 13 Å distance from the box edges to any atom of the protein. Short-range coulombic interactions were calculated using 1 fs time steps and a 9.0 Å cut-off, whereas long-range coulombic interactions were estimated using the smooth Particle Mesh Ewald (PME) method (Darden et al., 1993). Each system was subjected to at least 3 μs of simulation (three replicas).
Figure 6 caption (fragment): hydration site (2) spheres in purple. Occupancy: number of water-oxygen atoms that occupy a given hydration site during a short-time simulation (5 ns); ΔG: free energy; ΔH: enthalpic contribution; T: temperature (K); ΔS: entropic contribution.
Root mean square deviation (RMSD) values of the protein backbone were used to monitor simulation equilibration and protein folding changes (Figures S2 and S3A, C, and E). The fluctuation by residue (RMSF) was calculated using the initial MD frame as a reference and compared between ligand-bound and apostructure simulations (Figures S2 and S3B, D, and F). All the trajectory and interaction data are available in the Zenodo repository (DOI: 10.5281/zenodo.3980660).
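The per-residue fluctuation measure used here (RMSF relative to the initial frame) reduces to a short NumPy computation; a minimal sketch, assuming the trajectory is already aligned and loaded as an array (the array shapes are our assumption, not the Maestro data format):

```python
import numpy as np

def rmsf_per_residue(traj, ref):
    """RMSF per residue relative to a reference frame.

    traj: (n_frames, n_residues, 3) aligned CA coordinates;
    ref:  (n_residues, 3) reference frame (here, the initial MD frame)."""
    diff = traj - ref[None, :, :]        # displacement per frame and residue
    sq = np.sum(diff ** 2, axis=-1)      # squared distance per residue
    return np.sqrt(sq.mean(axis=0))      # root of the time-averaged value
```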
Atomic interactions and distances were determined using the Simulation Event Analysis pipeline as implemented in Maestro 2019v.4 (Schrödinger LLC). The criteria for a protein-ligand H-bond are: a distance of ≤2.5 Å between the donor and acceptor atoms (D-H···A); an angle of ≥120° between the donor-hydrogen-acceptor atoms (D-H···A); and an angle of ≥90° between the hydrogen-acceptor-bonded atoms (H···A-X). The corresponding requirements for protein-water and water-ligand H-bonds are ≤2.8 Å (D-H···A), ≥110° (D-H···A), and ≥90° (H···A-X). Non-specific hydrophobic interactions are defined by the presence of a hydrophobic side chain within 3.6 Å of the ligand's aromatic or aliphatic carbons. A π-π interaction is recorded when two aromatic groups are stacked face-to-face or face-to-edge, within a distance of 4.5 Å. MD trajectories were visualized, and figures produced, with PyMOL v.2.4 (Schrödinger LLC, New York, NY, USA).
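The geometric criteria above translate directly into code; a minimal sketch for the protein-ligand case (our own helper, not Schrödinger's implementation), taking atom positions as 3-vectors:

```python
import numpy as np

def angle_deg(a, b, c):
    """Angle in degrees at vertex b formed by points a-b-c."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def is_protein_ligand_hbond(donor, hydrogen, acceptor, acceptor_bonded):
    """Apply the protein-ligand H-bond criteria quoted in the text:
    donor-acceptor distance <= 2.5 A, D-H...A angle >= 120 deg,
    H...A-X angle >= 90 deg (X = atom bonded to the acceptor)."""
    dist_ok = np.linalg.norm(acceptor - donor) <= 2.5
    dha_ok = angle_deg(donor, hydrogen, acceptor) >= 120.0
    hax_ok = angle_deg(hydrogen, acceptor, acceptor_bonded) >= 90.0
    return dist_ok and dha_ok and hax_ok
```

The protein-water and water-ligand variants only change the three thresholds.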
Principal Component Analysis (PCA) was used to study the main features of the monomeric and dimeric backbone movements. The backbone atoms of chain A and chain B were extracted and aligned using scripts (trj_selection_dl.py and trj_align.py) from the Schrödinger package 2019v.4. Individual simulations from all runs were merged using the trj_merge.py script into a final trajectory and CMS file, which was further used to generate the principal components. The actual PCA was done using the trj_essential_dynamics.py script. PCA graphics were generated using a Python script available in the GitHub repository (https://github.com/gmf12/pcaanalysismrpo.git). All commands were generated using Jupyter (Matplotlib, Seaborn, Numpy, and Pandas).
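The essential-dynamics step can also be reproduced outside the Schrödinger toolchain; a minimal NumPy sketch of PCA on aligned backbone coordinates (the array shapes are our assumption) is:

```python
import numpy as np

def backbone_pca(coords):
    """PCA of aligned backbone coordinates.

    coords: (n_frames, n_atoms, 3) array. Returns the fraction of total
    variance per component and the per-frame projections onto the PCs."""
    X = coords.reshape(len(coords), -1)
    X = X - X.mean(axis=0)                  # remove the mean structure
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]         # sort components by variance
    evals, evecs = evals[order], evecs[:, order]
    fractions = evals / evals.sum()
    projections = X @ evecs                 # per-frame PC projections
    return fractions, projections
```

The quoted 82.8%/4.3% figures correspond to `fractions[0]` and `fractions[1]` of such a decomposition.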
WaterMap calculations
WaterMap calculations were used to analyze the impact of solvation on the active site region of SARS-CoV-2 Mpro. Briefly, short-run MD simulations (5 ns) of the Mpro active site (apostructure) were performed using the Desmond molecular dynamics engine with the OPLS3e force field. The binding site was defined to include all protein residues within a 5 Å distance of any atoms in the catalytic dyad (His41 and Cys145) and supporting residues (His163, Gln189 and Glu166). The protein structure was restrained throughout the simulation. Water molecules were clustered into distinct hydration sites. Enthalpy values of the hydration sites were obtained by averaging over the non-bonded interactions of each water molecule in the cluster. Entropy values were calculated using numerical integration of the local expansion of the entropy in terms of spatial and orientational correlation functions (Young et al., 2007).
Dynamic critical exponents of the Ising model with multispin interactions
We revisit the short-time dynamics of the 2D Ising model with three-spin interactions in one direction and estimate the critical exponents $z$, $\theta$, $\beta$ and $\nu$. Taking properly into account the symmetry of the Hamiltonian, we obtain results completely different from those obtained by Wang et al. For the dynamic exponent $z$ our result coincides with that of the 4-state Potts model in two dimensions. In addition, results for the static exponents $\nu$ and $\beta$ agree with previous estimates obtained from finite-size scaling combined with conformal invariance. Finally, for the new dynamic exponent $\theta$ we find a negative value close to zero, a result also expected for the 4-state Potts model according to Okano et al.
Since the work by Janssen et al [1] and Huse [2] pointing out the existence of another universal stage in early-time critical dynamics, several statistical models have been investigated to confirm the analytical predictions about the "critical initial slip" and to enlarge the knowledge of critical phenomena [3], [4], [5], [6], [7], [8]. The investigation of universal behavior in short-time dynamics avoids the critical slowing down of equilibrium simulations and provides an alternative way [9] to calculate the new exponent $\theta$, which governs the initial behavior of the magnetization, the dynamic critical exponent $z$, as well as the static exponents $\beta$ and $\nu$.
In this letter we adopt this approach to study the two-dimensional (2D) Ising model with three-spin interactions in one direction and calculate its set of exponents. Our motivation came from a recent paper by Wang et al [10], whose estimates were in complete disagreement with pertinent results in the literature.
Here we show that, when the symmetry of the model is taken properly into account, good agreement is obtained with expected results.
The Hamiltonian of the 2D Ising model with three-spin interactions ($m = 3$) in one direction is [11]
$$-\beta H = \sum_{i,j} \left( K_x\, S_{i,j} S_{i+1,j} S_{i+2,j} + K_y\, S_{i,j} S_{i,j+1} \right),$$
where $S_{i,j} = \pm 1$ is the Ising spin variable. The model is known to be self-dual [12], its critical line being
$$\sinh(2K_x)\,\sinh(2K_y) = 1$$
for all $m$. For the particular isotropic case ($K_x = K_y$) the critical coupling is
$$K_c = \tfrac{1}{2}\ln(1+\sqrt{2}),$$
which is the same as that of the standard 2D Ising model. Symmetry analysis and ground-state degeneracy considerations suggest that the model is in the same universality class as the $q$-state Potts model, with $q = 2^{m-1}$ [12], [13]. This result is supported by finite-size scaling studies [14], [12], [15], weak- and strong-coupling expansions [16], conformal invariance [17], [18], [19], standard Monte Carlo simulations [20], [21], [13] and a mapping of the $m = 3$ model onto the extreme anisotropic limit of the 4-state Potts model [22]. In most of those papers the argument to include the Ising model with three-spin interactions and the 4-state Potts model in the same universality class is based on the value of the exponents $\nu$ and $\alpha\,(\approx 2/3)$ [23]. All of them respect the symmetry of the Hamiltonian. Very little is known about the exponents $\beta$ [21] and $z$ [10].
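For concreteness, the Hamiltonian (three-spin couplings along one axis, ordinary pair couplings along the other) is easy to evaluate on a periodic lattice; a sketch, assuming spins are stored as a NumPy array with the three-spin direction along axis 0 (this storage convention is ours):

```python
import numpy as np

def neg_beta_H(S, Kx, Ky):
    """-beta*H for the m = 3 model on a periodic lattice of +/-1 spins:
    three-spin products S[i,j] S[i+1,j] S[i+2,j] along axis 0 and
    two-spin products S[i,j] S[i,j+1] along axis 1."""
    three_spin = S * np.roll(S, -1, axis=0) * np.roll(S, -2, axis=0)
    two_spin = S * np.roll(S, -1, axis=1)
    return Kx * three_spin.sum() + Ky * two_spin.sum()
```

The fourfold ground-state degeneracy can be checked directly: the all-up state and the column pattern (+, -, -) repeated along the three-spin axis give the same energy, which is also why the lattice length along that axis must be a multiple of three.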
The ground state for the general m-spin interaction is $2^{m-1}$-fold degenerate [14], which implies that the 2D Ising model with $m = 3$ spin interactions is fourfold degenerate. The relevant symmetry of this model is semi-global [24]: the Hamiltonian is symmetric under the reversal of all the spins in any two sublattices, which leads to the existence of three independent interpenetrating sublattices in the system. At $T = 0$, the possible states consist of repetitions of the patterns $+++$, $+--$, $-+-$, $--+$ in the horizontal direction, copied along the lines. Accordingly, it is important to take lattice sizes that are multiples of three (in order to respect the symmetry of the Hamiltonian) and to work with the appropriate order parameter, the sublattice magnetization, in order to avoid the effect of staggered magnetization. We suspected that Wang et al [10] neither considered the m-spin symmetry in their simulations nor worked with the sublattice magnetization, since they presented results only for square lattices that do not obey the previous condition (multiples of three). In this sense, it seemed relevant to us to repeat the simulations, paying attention to the above-mentioned points, to check the apparent failure of the short-time approach in this case.
We began by repeating the analysis made by Wang et al [10] for the Binder cumulant
$$U(t, L) = \frac{\langle M^2 \rangle}{\langle M \rangle^2} - 1,$$
where $\langle\,\rangle$ denotes the average over samples, $t$ is the time, $M(t) = \sum_{i,j} S_{i,j}(t)/L^2$ is the magnetization at time $t$, and $L$ is the size of the square lattice. They argue that this expression obeys the power-law form
$$U(t) \sim t^{d/z} \qquad (5)$$
when the dynamical process starts from an ordered state ($m_0 = 1$), which is a fixed point under renormalization-group transformations. Fig. 1 shows explicitly the different results obtained when we use the sublattice magnetization. The average is taken over 50000 independent initial configurations and the error bars (smaller than the size of the points) are obtained by repeating each simulation five times. When the simulation is performed without the sublattice considerations, the slope of the curve agrees with that presented by Wang et al, which confirmed our suspicions. From our point of view [25], though, this cumulant should obey the power-law form (5) only when different initial conditions are used in the study of the magnetization and its second moment. Scaling arguments [3] assert that the second moment of the magnetization behaves as
$$M^{(2)}(t) \sim t^{(d - 2\beta/\nu)/z}$$
only when the samples are taken with zero initial magnetization ($m_0 = 0$). On the other hand, short-time scaling behavior implies
$$M(t) \sim t^{-\beta/\nu z}$$
for samples starting from the ordered state ($m_0 = 1$) [26]. Thus, performing two different simulations under those conditions, we obtain the time evolution of the ratio $\langle M^2(t) \rangle / \langle M(t) \rangle^2$, which furnishes the exponent $d/z$ in a log-log plot (Fig. 2). From the slope of that curve we estimate $z = 2.380 \pm 0.004$.
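The final step of that procedure, extracting d/z from the ratio of the two independent runs, can be sketched as follows (the function name is ours, and the arrays stand for the averaged time series from the m0 = 0 and m0 = 1 simulations):

```python
import numpy as np

def dz_from_ratio(t, M2_disordered, M_ordered):
    """Slope of log F2(t), with F2 = <M^2>_{m0=0} / <M>_{m0=1}^2,
    which short-time scaling predicts to grow as t^(d/z)."""
    F2 = M2_disordered / M_ordered ** 2
    slope, _ = np.polyfit(np.log(t), np.log(F2), 1)
    return slope
```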
To confirm our result we used two other approaches: the generalized fourth-order Binder cumulant [3] and the parameters Q and R introduced by de Oliveira [27]. In both cases, the collapse of the curves for different lattice sizes at the critical temperature is used to determine the dynamical critical exponent z from short-time simulations. The Binder cumulant

U_4(t, L) = 1 - M^(4)(t) / [3 M^(2)(t)^2]

satisfies, at T = T_c, the scaling relation

U_4(t, L) = U_4(b^{-z} t, b^{-1} L),

where b = L/L', since U_4 scales as L^0. This technique has proved useful in determining the exponent z and has been applied to the 2D and 3D Ising models [3], [28], the 3-state Potts model [7], the majority-vote model [29] and cellular automata [30]. The initial magnetization of the samples, in this case, is zero, as is the correlation length. The resulting estimates are consistent; the error bars, however, are larger than those obtained by the damage-spreading technique [31]. In Fig. 3, we show the collapse of the cumulant U_4 for lattice pairs (L, 2L) with z = 2.3. In fact, the range of z for which the collapse is still observed is z = 2.3 +/- 0.1. This result supports our previous estimate for z (2.383 +/- 0.004) and can be related to the 4-state Potts model exponent [32]. In order to stress the importance of considering the symmetry of the Hamiltonian, we exhibit in Fig. 4 the deformation of the Binder cumulant when there is no sharp preparation of the initial magnetization on the sublattices and the magnetization evolution is calculated without restrictions. When we use the parameters Q(t, L) and R(t, L) introduced by de Oliveira [27], together with the corresponding scaling relations, we obtain the collapse among curves for different lattices (see Fig. 5 and Fig. 6) when time is scaled with z = 2.3. Samples in these cases were initialized with all spins up (m_0 = 1).
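The collapse test can be illustrated numerically (a sketch with synthetic data, not the paper's code): rescale the time axis of the larger lattice's curve by b^(-z) and scan z for the value that minimizes the mismatch with the smaller lattice's curve.

```python
import numpy as np

def collapse_error(t, u_small, u_big, z, b=2.0):
    """Mean-square mismatch between the cumulant on lattices L and bL
    when the time axis of the larger lattice is rescaled by b**(-z);
    it vanishes for a perfect collapse U4(t, bL) = U4(t / b**z, L)."""
    t_resc = t / b ** z
    mask = (t_resc >= t[0]) & (t_resc <= t[-1])
    u_interp = np.interp(t_resc[mask], t, u_small)
    return np.mean((u_big[mask] - u_interp) ** 2)

def best_z(t, u_small, u_big, z_grid):
    errs = [collapse_error(t, u_small, u_big, z) for z in z_grid]
    return z_grid[int(np.argmin(errs))]

# Synthetic check: curves built from a common scaling function of
# t / L**z with z = 2.3 collapse best at z = 2.3.
z_true, L = 2.3, 24
t = np.linspace(1.0, 2000.0, 4000)
u_L = np.tanh(t / L ** z_true)
u_2L = np.tanh(t / (2 * L) ** z_true)
z_grid = np.arange(2.0, 2.6, 0.01)
z_best = best_z(t, u_L, u_2L, z_grid)
print(round(z_best, 2))  # -> 2.3
```

With noisy Monte Carlo data the minimum of the mismatch broadens, which is why the text quotes a range (z = 2.3 +/- 0.1) rather than a sharp value from this method.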
In order to calculate the exponent nu, we studied the derivative of the logarithm of the magnetization with respect to the reduced temperature tau = (T - T_c)/T_c, which presents the scaling form

d ln M(t, tau)/d tau |_{tau=0} ~ t^{1/(nu z)}.

Fig. 7 shows the power-law behavior of this derivative when Delta tau = 0.0002. The measured slope of the curve gives 1/(nu z) = 0.624 +/- 0.005. Thus, taking z = 2.383 +/- 0.004 we find nu = 0.67 +/- 0.01, to be compared with the conjectured value nu = 2/3.
Since the values of the exponents nu and z are now known, the exponent beta of the magnetization can be obtained from the power-law increase of the second moment of the magnetization, Eq. (6). Fig. 8 presents, on a double-log scale, the power-law behavior of M^(2)(t). From the slope of these lines we estimate beta = 0.11 +/- 0.02. This result is in agreement with the expected value beta = 0.125 [24], [34], [35].
In order to extend the picture of universality we investigated the exponent theta with a technique recently proposed by Tome and de Oliveira [36]. In their paper they show that the exponent theta can be calculated independently, without the sharp preparation of the samples, since the time correlation of the total magnetization in samples with a random initial configuration also exhibits power-law behavior:

< M(t) M(0) > ~ t^theta.

This procedure avoids the use of an initial state with a nonzero (but small) magnetization, as well as the numerical extrapolation m_0 -> 0, to calculate the dynamic exponent theta. Fig. 9 shows, on a log-log scale, the behavior of < M(t) M(0) > for different lattice sizes. The slope of those curves gives theta = -0.03 +/- 0.01, which is compatible with the conjecture by Okano et al for the 4-state Potts model [5].
In summary, we have obtained static and dynamic critical exponents for the Ising model with multispin interactions using short-time Monte Carlo simulations. Our results show that this model and the 4-state Potts model share the same set of critical exponents even at the dynamic level. When compared to the paper by Wang et al, this letter shows the importance of properly taking into account the symmetry of the Hamiltonian when dealing with the magnetization and the boundary conditions. We stress that the present result for the exponent nu is better than previous estimates obtained by finite-size scaling and Monte Carlo approaches [24].
"year": 2001,
"sha1": "594e3a1f9cad95c9394edc869ee50e71c2f2457e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0103165",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "594e3a1f9cad95c9394edc869ee50e71c2f2457e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
The soft and hard X-rays thermal emission from star cluster winds with a supernova explosion
Massive young star clusters contain dozens or hundreds of massive stars that inject mechanical energy in the form of winds and supernova explosions, producing an outflow which expands into the surrounding medium, shocking it and forming structures called superbubbles. The regions of shocked material can have temperatures in excess of 10^6 K, and emit mainly in thermal X-rays (soft and hard). This X-ray emission is strongly affected by the action of thermal conduction, as well as by the metallicity of the material injected by the massive stars. We present three-dimensional numerical simulations exploring both effects: the metallicity of the stellar winds and supernova ejecta, and thermal conduction.
INTRODUCTION
It is well known that massive O-B type stars inject a considerable amount of mechanical energy into the interstellar medium (ISM), in the form of stellar winds or supernova (SN) explosions. The energy input by these events is sufficient to drive strong shocks that expand into the ISM, generating a structure called a bubble.
The model proposed by Weaver et al. (1977), and later expanded by Chu & Mac Low (1990) and Chu et al. (1995), is considered the standard model of bubbles driven by stellar winds. It considers the injection of mechanical energy into the ISM from stellar winds, which results in the formation of a bubble. This bubble is surrounded by a cool shell of ISM material that has been swept up by the expanding shock front. The shocked (and thereby heated and compressed) material in the interior of the bubble emits considerably in X-rays, whereas the outer, cooler shell emits at optical wavelengths.
The original Weaver et al. (1977) model considers a single stellar wind source. Some time later, in order to explain what is now known as superbubbles, the model was extended to include multiple wind sources (see Chu & Mac Low 1990; Chu et al. 1995; Cantó et al. 2000; Silich et al. 2004).
The simplest model of superbubble formation is as follows. Consider a cluster with N stars, each having a different mass-loss rate Mdot_w,i and wind velocity v_w,i. Since the stars inject mechanical energy in the form of stellar winds, the total mechanical luminosity is given by

L_w = Sum_{i=1}^{N} (1/2) Mdot_w,i v_w,i^2.

At first, the stellar winds collide with each other and with the environment inside the cluster radius. Thus the space between the stars is filled with hot shocked material from the winds. This happens until a stationary flow is established, giving rise to a common cluster wind that forms a supershell. As this supershell expands through the surrounding ISM it creates a superbubble with the following structure (Weaver et al. 1977; Rodríguez-González et al. 2011; Velázquez et al. 2013): (i) The innermost region, located near the stars (where their winds collide), produces thermal hard X-ray emission (if the stellar winds have terminal velocities larger than 1000 km s^-1) and drives the expansion of the bubble through the pressure difference between the hot and dense interior and the colder and less dense environment.
(ii) After the individual winds from the stars coalesce into a cluster wind, it expands freely from the cluster radius outwards. In this zone X-ray emission is important only close to the cluster radius, and it consists mostly of soft X-rays.
(iii) Behind the main shock pushing into the ISM a reverse shock is formed. The reverse shock encounters the freely expanding wind and compresses and heats it to soft X-ray emitting temperatures. The region filled by shocked wind is quite extended and dominates the emission in X-rays, particularly in the soft energy bands.
(iv) The outermost region of the superbubble consists of a shell of shocked ISM that has been swept up by the main shock. Beyond this zone there is only unperturbed ISM material.
The original wind-blown bubble (WBB) model proposed by Weaver et al. (1977) overpredicts the X-ray luminosity. One reason is that this model includes thermal conduction and consequently produces a denser interior than in the case without it, which in turn increases the X-ray luminosity. Furthermore, wind-blown bubble models do not take into account the radiative losses within the cluster radius, which can have a significant impact on the luminosity (see Rodríguez-González et al. 2011). Recent examples of this are the works of Dunne et al. (2003) and Reyes-Iturbide et al. (2009), which predict X-ray luminosities that exceed the observations by about one order of magnitude.
On the other hand, there are other models that underestimate the observed X-ray emission (for instance, see the work of Harper-Clark & Murray 2009 and Rogers & Pittard 2014, in which only the cluster wind region is considered), a problem for which different solutions have been explored. Chu & Mac Low (1990) proposed that, in order to increase the X-ray luminosity so as to match the observations, one should consider shock waves produced by the explosion of supernovae inside the star cluster. Stevens & Hartwell (2003) presented models where the soft X-ray luminosity is obtained as a function of the mass-loss rate, the cluster radius and the wind terminal velocity. They do not take mass loading into account, but consider that it can be relevant for the study of soft X-rays in this type of massive cluster. The work of Silich et al. (2001) deals with the effect on the X-ray emission of the high metal content injected by the massive stellar winds and the SN remnants. Rodríguez-González et al. (2011) showed that supernovae occurring near the centre of the cluster are not capable of completely reproducing the observed X-ray luminosity, nor do they help explain the kinematics of the shell (without considering thermal conduction). They instead showed that off-centre SN explosions (for N70 and N185; see also Reyes-Iturbide et al. 2014) could help explain the two or three orders of magnitude difference between the observed luminosity and the standard model predictions. However, in these models the X-ray luminosity agrees with the observed value for only ~10 000 years, making the probability of observing them in this regime rather low. On the other hand, Velázquez et al. (2013) presented models of the M17 superbubble in which they considered the contribution of the gas of the parental cloud to the evolution. They showed that mass loading from the parental cloud can help increase the soft X-ray luminosity by up to an order of magnitude. In that work neither the metallicity nor thermal conduction was considered.
The Large Magellanic Cloud (LMC) is filled with superbubbles with significant soft X-ray emission. Some of these superbubbles (for instance, DEM L50 and DEM L152; see Jaskot et al. 2011) show evidence for off-centre supernova events which seem to interact with the external shell pushed by the stellar winds. In the observations made by Jaskot et al. (2011) the gas of the supernova remnant, located close to the superbubble edge, is still seen. These objects have luminosities up to an order of magnitude higher than those predicted by the model of Weaver et al. (1977).
Moreover, the numerical models that appear in Jaskot et al. (2011) produce luminosities that are two orders of magnitude lower than the observations (~10^36 erg s^-1). Therefore the authors explored the effect of metallicity and of mass loading by clouds to bridge the luminosity deficit in soft X-rays. They calculated the mass of metal-enriched material injected by the supernova explosions (Maeder 1992; Oey 1995; Silich et al. 2001; Añorve-Zeferino et al. 2009) and found that metallicities from 3 to 10 times solar can be achieved; using the equations of Silich et al. (2001), they concluded that this effect is not sufficient to account for the differences. They conclude that the main mechanism that can explain such an important enhancement of the total X-ray luminosity is mass loading.
Recently, Rogers & Pittard (2014) presented a study of the soft X-ray emission during the various evolutionary stages of massive stars embedded in a dense giant molecular cloud (GMC), going through the red supergiant and Wolf-Rayet stages up to the supernova phase. They showed that the inclusion of the GMC results in a short-lived attenuation of the X-ray emission of the cluster, lasting until an important fraction of the material has been carried away from the wind interaction region. After this occurs, the luminosity remains practically constant.
The X-ray emission of a star changes substantially as it goes through distinct evolutionary stages. For instance, the X-ray luminosity drops abruptly during the red giant phase and increases substantially once in the Wolf-Rayet phase. Rogers & Pittard (2014) show that, in spite of the differences between their models and some observations, their results agree reasonably with other observations, such as the cases of M17 and the Rosette Nebula. They found that the emission produced by their model during the early wind-dominated phase is smaller than the prediction of the standard model (Weaver et al. 1977; Chu & Mac Low 1990), but larger than the emission expected in models that only consider the emission from the interaction region of the winds of massive stars (the cluster wind). Finally, for stars in the main sequence, they found luminosities two or three orders of magnitude above those predicted by the standard model, lasting for more than 4.5 kyr.
In this work, we present a series of numerical models exploring the effects of supernova explosions, metallicity and heat conduction on the thermal soft and hard X-ray luminosity of a massive star cluster. The paper is organised as follows: in Section 2 we present the numerical setup of our models and describe the implementation of thermal conduction and metallicity in the gas-dynamic equations; in Section 3 we show the resulting synthetic emission in the soft and hard X-ray bands, together with a brief discussion of our results. In Section 4 we compare our numerical models with four interesting observed bubbles. Finally, a summary is given in Section 5.
THE NUMERICAL SIMULATIONS
To explore the effects of SN explosions, metallicity and thermal conduction on the cluster stellar winds, we performed a series of numerical simulations and estimated the soft and hard X-ray emission that would be produced.
We used the huacho code (see Esquivel et al. 2009) to perform all numerical simulations. The code solves the hydrodynamic equations (1-3) on a three-dimensional uniform Cartesian mesh, using a second-order finite volume method with HLLC fluxes (Toro et al. 1994) and a piecewise linear reconstruction of the variables at the cell interfaces with a minmod slope limiter. The code also includes radiative losses and isotropic thermal conduction:

d(rho)/dt + div(rho u) = 0,   (1)
d(rho u)/dt + div(rho u u + P I) = 0,   (2)
dE/dt + div[(E + P) u] = -L_rad(Z, T) - div(q),   (3)

where rho, u, T, P and E are the mass density, velocity, temperature, thermal pressure and energy density, respectively, I is the identity matrix, gamma is the heat capacity ratio, L_rad(Z, T) is the energy loss rate, and q is the heat flux due to electron conduction (see sub-section 2.2). The system is closed with the ideal gas law E = rho |u|^2/2 + P/(gamma - 1). To compute the energy loss rate, we use a tabulated cooling function from the freely available chianti database (Dere et al. 1997; Landi et al. 1996). As shown in Figure 1, we have constructed cooling functions for a range of metallicities of 0.1-30 Zsun. The computational domain is a cube of 140 pc on a side, discretised by 256^3 cells in a uniform grid, yielding a resolution of 0.5469 pc. From the number of massive stars, one can estimate the mass of the star cluster (~3500 Msun, using Starburst99; Leitherer & Heckman 1995). In our model we did not consider the total mass of the parental cloud, but we selected the size of the simulation box so that it contains a typical superbubble radius (~50 pc; e.g. DEM L50 and DEM L152).
The simulations include 15 stellar wind sources placed randomly within the cluster radius (Rc = 10 pc; the distribution is the same for all the simulations). The stellar winds are imposed in spherical regions of radius Rw = 6.03 x 10^18 cm (1.95 pc), corresponding to 5 cells of the grid, and have a temperature Tw = 10^5 K. All the stars have the same mass-loss rate, Mdot_w = 10^-6 Msun yr^-1, and wind velocity, v_w = 1500 km s^-1. We turned on the stellar winds at the beginning of the simulation. The rest of the computational domain was initially filled by a homogeneous environment with temperature T0 = 10^4 K and density n0 = 2 cm^-3.
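With these parameters the cluster's mechanical luminosity can be checked directly (a back-of-the-envelope sketch, not code from the paper; the constants are standard cgs values):

```python
# Mechanical luminosity of the cluster wind for the adopted parameters:
# L_w = sum_i (1/2) Mdot_i v_i^2 over 15 identical stars with
# Mdot = 1e-6 Msun/yr and v_w = 1500 km/s.
MSUN = 1.989e33          # g
YR = 3.156e7             # s
N_STARS = 15
mdot = 1e-6 * MSUN / YR  # mass-loss rate per star, g/s
v_w = 1.5e8              # wind velocity, cm/s
L_w = N_STARS * 0.5 * mdot * v_w ** 2
print(f"{L_w:.2e} erg/s")  # -> 1.06e+37 erg/s
```

This reproduces the mechanical luminosity of ~1.1 x 10^37 erg s^-1 quoted later in the text.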
We impose a SN inside the bubble formed by the winds at four different times: 2, 3, 5 and 7.5 x 10^5 yr (each corresponding to a different model). These times were chosen to control the distance from the site of the supernova to the superbubble shell. Since we do not follow the evolution of the star cluster that produces the shell (as Rogers & Pittard 2014 do), these times are related to the superbubble dynamical age that would be observed and not to the star cluster age. The superbubble dynamical age is smaller than that of the stars because it does not include the time needed to clear up the material between the stars and form a common bubble. While in most of the models the supernova is placed at the centre of the star cluster, we have included two off-centre models: one in which the SN is placed 5 pc from the centre, and one where it is near the edge of the bubble (10 pc from the centre). The supernova explosion was imposed by injecting a total energy of 1 x 10^51 erg and 2 Msun of mass in a region with a radius of 2 pc. Half of this energy was injected as kinetic energy (with a velocity that increases linearly with radius, and constant density and temperature inside the imposition region), and the rest as thermal energy (Toledo-Roy et al. 2014).
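The kinetic part of the injection can be illustrated with a short sketch (a hypothetical helper, not the paper's code): for a linearly increasing velocity profile v(r) = v_max r/R in a uniform-density sphere, the integrated kinetic energy is (3/10) M v_max^2, which fixes v_max once half of the 10^51 erg is assigned to kinetic energy.

```python
import numpy as np

MSUN = 1.989e33  # g
PC = 3.086e18    # cm

def vmax_linear_profile(E_kin, M_ej):
    """Peak velocity of a profile v(r) = v_max * r / R inside a
    uniform-density sphere of mass M_ej, chosen so that the integrated
    kinetic energy equals E_kin.  For this profile the volume integral
    of (1/2) rho v^2 gives E_kin = (3/10) * M_ej * v_max**2."""
    return np.sqrt(10.0 * E_kin / (3.0 * M_ej))

# Half of the 1e51 erg goes into kinetic energy of 2 Msun of ejecta:
v_max = vmax_linear_profile(0.5e51, 2.0 * MSUN)
print(f"{v_max / 1e5:.0f} km/s")  # several thousand km/s
```

The resulting peak velocity is of the order of typical SN ejecta speeds, which is a useful sanity check on the imposed initial condition.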
To explore the effect of metallicity we have performed some runs with a homogeneous metallicity for all three components (the ISM, the stellar winds and the SN) and some models with a different metallicity for each of these components. In the homogeneous-metallicity models we have used a metallicity of 0.3 Zsun. For the variable-metallicity models, following Silich et al. (2001), we use Z_ISM = 0.3 Zsun for the ISM, Z_wind = 3.0 Zsun for the mass injected in the form of winds, and Z_SN = 10 Zsun for the SN ejecta. We have also included thermal conduction in two of the models.
The parameters of the simulations are listed in Table 1. As can be seen from the table, we named the models to reflect the parameters used: the number after 'sr' corresponds to the location of the SN in pc; it is followed by 'tsn' and another number to indicate the time of the SN detonation since the winds sources were turned on (in units of 10 5 yr); next there is the letter 'z' followed by the number 0.3 for the uniform metallicity models or the letter 'v' for the variable metallicity runs; for the models with thermal conduction a letter 'C' is appended at the end of the model name.
Adding the effect of metallicity to the cooling
The cooling in the code is added as a source term after updating the hydrodynamic variables. At the end of each timestep, we estimate the cooling by interpolating a tabulated cooling curve which is, for a given metallicity, a function of the temperature. The energy loss is then subtracted from the internal energy of each cell at every timestep. For the runs with a uniform metallicity this is a simple linear interpolation (in temperature) of a single table generated from the chianti database. For the runs with varying metallicity we created a series of tables for metallicities in the range 0.1-30 Zsun; these are plotted in Figure 1. Along with the gas-dynamic equations (eqs. 1-3) we treat the metallicity Z as a passive scalar by including an extra equation of the form

d(rho Z)/dt + div(rho Z u) = 0.

Using the metallicity value at each cell we do a bilinear interpolation (with metallicity spaced linearly and temperature logarithmically) to estimate the cooling to be applied there. For the ISM the metallicity is set to 0.3 Zsun at the start of the simulation. For the winds and supernovae the gas is injected into the simulation either with Z_wind = Z_SN = 0.3 Zsun (for the homogeneous models) or with Z_wind = 3 Zsun and Z_SN = 10 Zsun (for the inhomogeneous models).
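A minimal sketch of the bilinear lookup described above (linear in Z, logarithmic in T; illustrative only, with hypothetical grids rather than the actual chianti tables):

```python
import numpy as np

def cool_rate(T, Z, T_grid, Z_grid, table):
    """Bilinear interpolation of a tabulated cooling function
    table[j, i] = Lambda(Z_grid[j], T_grid[i]): linear in metallicity,
    logarithmic in temperature, as described in the text."""
    # fractional indices along each axis
    x = np.interp(np.log10(T), np.log10(T_grid),
                  np.arange(len(T_grid), dtype=float))
    y = np.interp(Z, Z_grid, np.arange(len(Z_grid), dtype=float))
    i = min(int(x), len(T_grid) - 2)
    j = min(int(y), len(Z_grid) - 2)
    fx, fy = x - i, y - j
    return ((1 - fx) * (1 - fy) * table[j, i]
            + fx * (1 - fy) * table[j, i + 1]
            + (1 - fx) * fy * table[j + 1, i]
            + fx * fy * table[j + 1, i + 1])

# The scheme is exact for a table linear in Z and in log10(T):
T_grid = np.logspace(4, 9, 11)
Z_grid = np.linspace(0.1, 30.0, 5)
table = 2.0 * Z_grid[:, None] + 3.0 * np.log10(T_grid)[None, :]
val = cool_rate(10 ** 5.3, 1.7, T_grid, Z_grid, table)
print(round(float(val), 6))  # -> 19.3 (= 2*1.7 + 3*5.3)
```

The same lookup structure serves both the cooling tables and the emission-coefficient tables described later in the text.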
The average metallicity in each region can be calculated (following Silich et al. 2001) as

Zbar = (M_z,ej + M_z,ism) / (M_ej + M_ism),

where M_z,ej and M_z,ism are the masses of metals in the ejecta (from winds and/or SN) and in the swept-up interstellar gas, respectively, while M_ej and M_ism are the total masses of the ejected and swept-up interstellar gas. In general, the most metal-enriched regions are found behind the contact discontinuity that separates the main and reverse shocks. Even though some mixing occurs at the interface (mainly due to hydrodynamical instabilities and/or turbulence), since the swept-up ISM mass is larger than the ejected mass, the metallicity of the shell remains close to that of the ISM.
Thermal conduction
In order to include the effect of thermal conduction by free electrons in our numerical simulations, we add a heat flux term (-div q) on the right-hand side of the energy equation (3).

The heat conduction due to collisions with free electrons in a plasma is given by the classical Spitzer (1962) law:

q = -kappa grad T,

where kappa is the thermal conductivity, given by

kappa = beta T^{5/2},

with beta ~ 6 x 10^-7 erg s^-1 cm^-1 K^-7/2 for a fully ionized hydrogen plasma (see Spitzer 1962). This result relies on the assumption that the electron mean free path lambda is small compared to the scale-length of temperature variations, T/|grad T|. When the mean free path of the electrons is comparable to or larger than the temperature scale-length, the heat flux saturates. In this regime the heat flux can be estimated from the local sound speed (c_s) and pressure (P), as described by Cowie & McKee (1977):

q_sat = 5 phi_s P c_s,

where phi_s is a factor of order unity (we have used phi_s = 1.1). At every timestep we compute the heat fluxes in the classical and the saturated regimes, keep the smaller one, and introduce its divergence as a source term in the energy equation. We note that the thermal conduction timescale is smaller than the hydrodynamic one determined by the standard CFL condition. For this reason we apply a sub-stepping method to include the source term (we take on the order of 100 sub-steps to integrate the source term for each hydrodynamical step).
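The flux-limiting step can be sketched as follows (illustrative only; the scalar treatment of the gradient and the `heat_flux` helper are simplifications of the 3-D implementation):

```python
# Classical Spitzer flux vs. the saturated flux, keeping the smaller
# magnitude, as done at every timestep in the text.
BETA = 6.0e-7   # Spitzer prefactor, erg s^-1 cm^-1 K^(-7/2)
PHI_S = 1.1     # order-unity saturation factor used in the text

def heat_flux(T, grad_T, P, c_s):
    q_classic = BETA * T ** 2.5 * abs(grad_T)  # |q| = kappa |grad T|
    q_sat = 5.0 * PHI_S * P * c_s              # saturated limit
    return min(q_classic, q_sat)

# Shallow gradient: classical regime; steep gradient: saturated regime.
q_shallow = heat_flux(1e6, 1e-13, 1e-10, 1e7)
q_steep = heat_flux(1e6, 1e-6, 1e-10, 1e7)
```

Taking the minimum of the two regimes prevents the strong T^(5/2) dependence of the classical flux from producing unphysically large energy transport across the steep temperature jumps at the bubble's interfaces.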
X-ray emission coefficients
We take the output from the hydrodynamical simulations to estimate the X-ray luminosity in all the models. We consider that the emission coefficient in the low-density regime is j_nu(n, Z, T) = n_e^2 chi(Z, T), where n_e is the electron density and chi(Z, T) is a function of the temperature (T) and the metallicity (Z). For a given metallicity, the function chi can be computed and integrated over an energy band using the chianti atomic database and its associated IDL software (Dere et al. 1997; Landi et al. 1996). We have computed chi(Z, T) for various metallicities (Z = 0.1, 0.3, 1, 3, and 10 Zsun), using the ionisation equilibrium model of Mazzotta et al. (1998), over a range of temperatures from 10^4 to 10^9 K. The emission coefficients were integrated over two energy bands: soft X-rays (0.1-2 keV) and hard X-rays (2-10 keV). The result is a two-dimensional table of coefficients as a function of temperature and metallicity. Figure 2 shows the thermal soft (red lines) and hard (blue lines) X-ray emission coefficients as a function of temperature for several metallicities.
From the results of the simulations we obtain the density, temperature and metallicity in every computational cell and perform a bilinear interpolation to get chi, which we then use to determine the emissivity in the two energy bands. The contributions of all cells are then added to compute the total X-ray luminosity, both for soft and hard X-rays.
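The final sum can be sketched in a few lines (a toy grid with hypothetical uniform values, not the actual simulation output):

```python
import numpy as np

PC = 3.086e18  # cm

def xray_luminosity(n_e, chi, cell_size):
    """Total luminosity (erg/s) of the box: j = n_e^2 * chi summed over
    all cells, times the (uniform) cell volume.  n_e and chi are 3-D
    arrays of electron density and the band-integrated, already
    interpolated emission coefficient."""
    return np.sum(n_e ** 2 * chi) * cell_size ** 3

# Toy grid with hypothetical uniform values: n_e = 0.1 cm^-3 and
# chi = 1e-23 erg cm^3 s^-1 in 8^3 cells of 0.5469 pc each.
n_e = np.full((8, 8, 8), 0.1)
chi = np.full((8, 8, 8), 1e-23)
L_x = xray_luminosity(n_e, chi, 0.5469 * PC)
print(f"{L_x:.1e} erg/s")  # ~2.5e32 erg/s for this toy grid
```

In the actual analysis the arrays come from the 256^3 simulation output, with chi looked up per cell from the (T, Z) table for each energy band.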
RESULTS
The colour maps of Figures 3 and 4 show the density and temperature at three different evolutionary times. These were chosen to show the effect of the SN explosion on the X-ray emission (see the next section). We present a time slightly before the SN explosion (top row), the time of peak luminosity after the explosion (middle row), and a time at which the total luminosity has diminished back to a value near its pre-SN level (bottom row). The columns correspond to three different models: in the left and central columns the SN occurs at the centre of the cluster, without and with thermal conduction, respectively, and in the rightmost column the SN is 10 pc off-centre (the position of the SN is indicated by a star in the top row). Following the time sequence down the columns of these figures, it can be seen that the SN ejecta reach the edge of the wind bubble and push it further into the ambient medium. Due to the particular positions of the stars in these models, the gas distribution inside the wind bubble favours the expansion of the SN ejecta towards the upper right corner of the simulation box, and thus the blowout is more pronounced in this direction, the effect being larger if the SN explodes off-centre (at the edge of the star cluster; see the rightmost panels).
Soft X-ray emission
Following the pressure-driven model discussed by Chu & Mac Low (1990), the soft X-ray luminosity can be estimated from

L_X = 3.29 x 10^34 I(tau) xi L37^{33/35} n0^{17/35} t6^{19/35} erg s^-1,

where I(tau) is a dimensionless integral of order unity, xi is the gas metallicity (in solar units), L37 = L_w/10^37 with L_w the mechanical luminosity of the cluster (in erg s^-1), n0 is the interstellar medium density (in cm^-3) and t6 is the cluster lifetime in Myr. In all the models presented here, the mechanical energy injected by the winds is 1.1 x 10^37 erg s^-1. With this mechanical luminosity, the total X-ray luminosity for the stellar wind contribution is ~10^33 erg s^-1 (see eq. 9, and also Chu & Mac Low 1990), for an interstellar medium density of 2 cm^-3 and metallicity of 0.3 Zsun after an evolution time of 2 x 10^5 yr.
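This estimate is easy to evaluate numerically (a sketch assuming the 33/35, 17/35 and 19/35 exponents of the Chu & Mac Low 1990 scaling; I(tau) is left as a free order-unity parameter since its value depends on the bubble's evolutionary stage):

```python
def lx_chu_maclow(L37, n0, t6, xi, I_tau=1.0):
    """Soft X-ray luminosity (erg/s) of a pressure-driven superbubble,
    following the Chu & Mac Low (1990) scaling quoted in the text;
    I_tau is the order-unity dimensionless integral."""
    return (3.29e34 * I_tau * xi
            * L37 ** (33 / 35) * n0 ** (17 / 35) * t6 ** (19 / 35))

# Parameters quoted in the text: L_w = 1.1e37 erg/s, n0 = 2 cm^-3,
# xi = 0.3 (solar units), t = 2e5 yr.
lx = lx_chu_maclow(L37=1.1, n0=2.0, t6=0.2, xi=0.3)
```

The weak (sub-linear) dependence on L37 and n0 is why the pre-SN luminosities of all the wind-only models cluster around the same value.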
We have computed the soft X-ray emission for all the models at 10^4 yr intervals. Figure 5 shows the evolution of the total soft X-ray luminosities for all models without thermal conduction and with the SN placed at the centre of the star cluster. The red lines show the models with uniform metallicity, and the blue ones those with variable metallicity.
The X-ray luminosity before the supernova event is in agreement with the value predicted by Chu & Mac Low (1990). Shortly after the supernova explosion the luminosity increases dramatically. We calculated the time intervals during which the soft X-ray luminosity remains above 10^34, 10^35 and 10^36 erg s^-1 (Delta ts,34, Delta ts,35 and Delta ts,36, respectively). The maximum luminosity achieved and these time intervals are shown in Table 2.
From Figure 5, we can see that the general shape of the luminosity curve after the SN explosion is quite similar in all the models. All models reached the same maximum luminosity of ~3 x 10^35 erg s^-1, and have Delta ts,34 ~ 6 x 10^4 yr and Delta ts,35 ~ 3 x 10^4 yr. In these models, in which the SN explosion occurs at the centre of the stellar distribution, luminosities above 10^36 erg s^-1 are never reached. Our results show only small differences in the soft X-ray emission between models with uniform metallicity and those with variable metallicity. The similarity of the emission across the models indicates that the emission is dominated by swept-up ISM material. This is partly because the high-metallicity gas (winds and SN) is kept at a temperature too high for thermal soft X-rays to be important.

Figure 3. Density maps at different stages of the evolution and for different models. The columns correspond to three distinct models: in the panels of the left column, the SN explosion occurs at the centre of the cluster (model sr0tsn5zv; panels a, d and g); those of the central column show the same model but including thermal conduction (sr0tsn5zvC; panels b, e and h); and in those of the right column the SN explosion occurs 10 pc off-centre (sr10tsn5zv; panels c, f and i). For all models shown here, the SN explosion occurs at t = 5 x 10^5 yr and the metallicity of the gas varies across components, as discussed in the text. The rows correspond to three relevant evolutionary stages: just before the SN explosion (top row), at the peak of X-ray luminosity (middle row), and once the luminosity has approximately returned to its original level (bottom row). The position of the SN is marked with a star in the panels of the top row. The spatial scale is the same across all panels and is shown in panel g.

Thermal conduction allows energy transport with reduced bulk motions, resulting in a denser inner region. Weaver et al. (1977) estimated a significant increase in the soft X-ray luminosity (up to 2 orders of magnitude) with respect to models without thermal conduction. Figure 6 displays the evolution of the soft X-ray luminosities for the 2 models with thermal conduction (sr0t2e5zvC and sr0t5e5zvC, magenta lines) and their counterparts without thermal conduction (sr0t2e5zv and sr0t5e5zv, blue lines). As can be seen, thermal conduction does increase the maximum luminosity of the models, but only by a factor of ~1.25, and after the SN explosion the luminosity returns to a value a factor of 2 larger. The time of emission above 10^34 erg s^-1 also increases, by factors of 1.2 and 1.6 for the SN imposed after 2 x 10^5 yr and 5 x 10^5 yr, respectively. Approximately the same timespan increase is found for emission above 10^35 erg s^-1; see Table 2. These increments in the time interval with emission above 10^34 and/or 10^35 erg s^-1 enhance the chance of such luminosities being observed.
We can see that the inclusion of different metallicities and/or thermal conduction induces only small discrepancies in the soft X-ray emission.
From these models it is clear that supernova explosions are a crucial ingredient of the thermal X-ray emission. The presence of a SN can explain the extra X-ray luminosity observed in several superbubbles. However, when the supernova event occurs at the centre of the cluster, the soft X-ray luminosity only reaches a few times 10^35 erg s^-1, still falling short of some of the observed values (e.g., those of Jaskot et al. 2011). Models with a centred explosion thus seem to remain underluminous.
A possibility that results in luminosities above 10^36 erg s^-1 is to place the SN at some distance from the centre of the star cluster. For this reason we have included models sr5tsn5zv and sr10tsn5zv, where the SN occurs at R_SN = 5 and 10 pc from the star cluster centre, respectively, both at t_SN = 5 x 10^5 yr. Figure 7 shows the soft X-ray luminosities for these two models compared to a model with the SN placed at the cluster centre (sr0tsn5zv).
We can see that the X-ray luminosities increase when the SN explodes closer to the edge of the bubble. The maximum soft X-ray luminosity increases by a factor of ∼ 1.25 between the model with supernova explosion at RSN = 0 Figure 5. Evolution of the total soft X-ray luminosities for models with the supernova explosion occuring at the centre of the star distribution. The red lines are the models with homogeneous metallicity (for a supernova event at t=2, 3, 5, and 7.5×10 5 yr dash-dotted, solid, dot-dash-dotted and dashed lines respectively). The blue lines are the models with different metallicities (Z ISM = 0.3 Z , Z wind = 3 Z and Z SN = 10 Z for the interestellar medium, stellar wind and supernova explosion, respectively, for SN explosions at t=2, 3 and 5×10 5 yr). and RSN = 5 pc and a factor of ∼ 4 when the SN explodes near the cluster radius (RC = 10 pc), reaching a maximum luminosity of Lmax = 1.23 × 10 36 erg s −1 . For model sr10tsn5zv, the only one that reached 10 36 erg s −1 , the time interval spent above 10 36 erg s −1 was 30 kyr. In addition, this last model predicts a time spent above 10 35 erg s −1 of 72 kyr, and one above 10 34 erg s −2 of 14 kyr. These numbers are ∼ 3 times larger than those of the model with the supernova explosion occurring at the centre of the star cluster.
From these results we can see that, on the one hand, for the case of the SN at the cluster centre, the soft X-ray luminosity increase is not very sensitive to the time at which it is detonated; the increase is only slightly larger for SN explosions that occur later in the evolution of the superbubble. On the other hand, when the SN is off-centre the luminosity increase depends considerably on the distance to the centre of the cluster.

Figure 6. Evolution of the total soft X-ray luminosities for models with the supernova explosion at the centre of the star distribution and inhomogeneous metallicity. The magenta lines are the models with thermal conduction and the blue lines are the models without thermal conduction (for a supernova event at t = 2 and 5 × 10 5 yr: dashed and solid lines, respectively).

Figure 7. Evolution of the total soft X-ray luminosities for models with the supernova explosion at t = 5 × 10 5 yr and inhomogeneous metallicity. The blue, red and olive lines are the models with the supernova event at R = 0, 5 and 10 pc, respectively.
The color maps of Figure 8 show the soft X-ray emissivity in the same layout as in Figure 3. Note that the blowout region (located at the upper right corner of the simulation box) provides the largest contribution to the luminosity increase seen 30 kyr after the explosion (middle panels). By 200 kyr after the explosion (bottom panels), the emissivity has almost returned to values comparable to its pre-SN level; however, due to the now larger X-ray emitting volume, the luminosity remains slightly above the original level (see Figure 7).
An interesting exercise is to compare the predicted luminosity statistics of our simulations with those of observed superbubbles. For this purpose we have taken a sample of 26 bubbles with luminosities greater than 10 34 erg s −1 from the literature (Oey 1996; Jaskot et al. 2011; Reyes-Iturbide et al. 2014; Dunne et al. 2001). Out of these, 18 have luminosities above 10 35 erg s −1 while only 3 are observed with L soft > 10 36 erg s −1 . We can use our numerical results to predict how many bubbles out of these 26 should have luminosities above these two levels. In order to do this, we computed, from Table 2, the ratios of the times spent above these levels to the overall time during which the luminosity is above 10 34 erg s −1 for model sr10tsn5zv (the only one that reaches 10 36 erg s −1 ). We find that ∆ts,35/∆ts,34 ∼ 52.6 % and ∆ts,36/∆ts,34 ∼ 21.9 %. Thus, assuming that all bubbles in this observed sample reach a luminosity of 10 36 erg s −1 at some point in their evolution, this model predicts that about 14 should have a luminosity above 10 35 erg s −1 while about 6 should be observed above 10 36 erg s −1 . Though the values do not coincide exactly, they reasonably match the observed fractions in the sample, N35/N34 ∼ 70 % and N36/N34 ∼ 12 %, where Nn is the number of bubbles with a soft X-ray luminosity greater than or equal to 10 n erg s −1 .

Figure 8. Comparison of the X-ray emissivity in the soft (colour maps, with the shown logarithmic scale given in erg s −1 cm −2 sterad −1 ) and hard (contours, logarithmically-spaced levels from 10 −11 to 10 −8 erg s −1 cm −2 sterad −1 ) bands at the three different stages shown in Figure 3.
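The duty-cycle counting argument can be reproduced in a few lines. The time intervals below are those quoted for model sr10tsn5zv; the interval above 10^34 erg/s (∼ 137 kyr) is the value implied by the quoted percentage ratios:

```python
# Duty-cycle estimate: if each of the 26 observed bubbles reaches
# 1e36 erg/s at some point, the fraction of a model bubble's lifetime spent
# above a threshold predicts how many bubbles should be observed there.
dt_34, dt_35, dt_36 = 137.0, 72.0, 30.0   # kyr above 1e34/1e35/1e36 erg/s

n_sample = 26
pred_35 = round(n_sample * dt_35 / dt_34)   # predicted bubbles above 1e35
pred_36 = round(n_sample * dt_36 / dt_34)   # predicted bubbles above 1e36
print(pred_35, pred_36)                     # -> 14 6

# Observed counts in the literature sample: 18 above 1e35, 3 above 1e36.
obs_frac_35 = 18 / n_sample
obs_frac_36 = 3 / n_sample
print(f"{obs_frac_35:.0%} {obs_frac_36:.0%}")  # -> 69% 12%
```

The prediction (14 and 6) over-counts the bright end relative to the observed sample (18 and 3), consistent with the discussion that follows.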
There could be several explanations for this difference. For one, it is hard to judge whether this small sample of superbubbles is representative of the general population, and thus some variability can be expected in the statistics. At the same time, our numerical results suggest that the position of a supernova explosion occurring inside the bubbles determines whether a luminosity of 10 36 erg s −1 is reached at all during their lifetimes. Thus, if not all of the observed bubbles have had off-centre SN explosions, it is to be expected that fewer of them would be observed above 10 36 erg s −1 than our models predict.

Table 3. Maximum hard X-ray luminosity and time intervals in which the hard X-ray emission remains above 10 33 and 10 34 erg s −1 .
Hard X-ray emission
Hard X-ray emission is produced in the hottest regions inside the bubble where individual winds interact, when the gas flow is faster than ∼ 1000 km s −1 , as well as during the early stages of the SN remnant evolution, where the cluster wind and/or the SN remnant are heated by the reverse shock. One should therefore expect metallicity to have a significant effect on the hard X-ray emission.
In Table 3 we show the maximum thermal hard X-ray luminosity, and the time intervals for which the luminosity remains above 10 33 erg s −1 (∆t h,33 ) and 10 34 erg s −1 (∆t h,34 ). Figure 9 shows the evolution of the thermal hard X-ray luminosity for all models with the SN placed at the centre. All three models with different metallicities exhibit maximum luminosities (∼ 3 × 10 34 erg s −1 ) that are ∼ 3 times larger than those of the models with homogeneous metallicity (∼ 1.0 × 10 34 erg s −1 ). We note that the thermal hard X-ray emission produced inside the star cluster (the regions of wind collisions) is underestimated by the models with homogeneous metallicity due to the rather low metallicity of the wind sources (0.3 Z ). In the models with variable metallicity the wind is injected with a more appropriate metal content (3 Z ), so the hard X-ray luminosity in these models should be closer to reality.
We can see from Table 3 that the maximum hard X-ray luminosity is significantly larger in the models with variable metallicity, typically by a factor of ∼ 3.5. The time interval during which the emission remains above 10 33 erg s −1 is similar, although slightly larger in the models with more realistic metallicity. In contrast, the time that the emission remains above 10 34 erg s −1 is much larger (by a factor of ∼ 4) than that obtained in the models with homogeneous metallicity.

Figure 9. Same as Figure 5 for the hard X-ray luminosities.

Figure 10. Same as Figure 6 for the hard X-ray luminosities.
An important fact to notice is that the ratio of the maximum soft X-ray luminosity to the maximum hard X-ray luminosity is of the order of 10. Velázquez et al. (2013) explored the ratios between soft and hard X-ray emission in the young star cluster M 17. This particular cluster is partially immersed in its parental cloud, and their models resulted in a ratio of soft to hard X-rays of two orders of magnitude. From our models we notice that the time intervals of high luminosity for soft X-rays (∆ts,34 and ∆ts,35) are larger than those obtained for hard X-rays (∆t h,33 and ∆t h,34 ). This shows that very young star clusters with a SN event can produce hard X-ray luminosities that are only an order of magnitude fainter than the soft X-rays. However, this happens only for a short interval during the earlier stages of the SN remnant; after that the hard X-ray emission drops abruptly.

Figure 10 shows the evolution of the hard X-ray luminosity for the two models with thermal conduction and their counterparts without thermal conduction. The maximum luminosities and the time intervals of high hard X-ray emission are remarkably similar regardless of thermal conduction. Although small differences can be seen, the position of the supernova explosion in the cluster does not have a significant influence on the overall and maximum thermal hard X-ray luminosity (see Figure 11).
It is also interesting to note that, for a SN exploding at the centre of the cluster, the highest luminosity achieved is much lower if the explosion occurs at later times (cf. the long-dashed curve in Figure 9 to the others).
The distribution of the hard X-ray emission before the SN explosion, at peak luminosity, and after the luminosity returns to its previous level can be seen as the (logarithmically spaced) contour levels in Figure 8. The emission during peak luminosity (middle row) is slightly more extended in the case where the SN is detonated off-centre (panel f), but returns to being concentrated inside the star cluster after the effect of the explosion has had time to decay (bottom row). The impact of thermal conduction on the hard X-ray emission is also evident. Before (top row) or some time after (bottom row) the SN explosion, the hard-band emission is at a much higher level in the cases without thermal conduction (left and right columns). However, during the luminosity peak (middle row) all models display important hard X-ray emission regardless of the inclusion of thermal conduction; the difference lies mainly in that, in the case including thermal conduction (panel e), the emission is slightly more centralised than in the cases without.
We must note that the resolution used in the models is not sufficient to capture all the details of the flow. Higher resolution allows larger compression factors as well as more small-scale structure inside the bubbles. To estimate the uncertainty in the X-ray luminosities due to limited resolution, we have taken a test case (model sr0tsn5z0) and reproduced the setup in the Walicxe-3d code (Toledo-Roy et al. 2014). This code has adaptive mesh refinement (AMR), which allows the resolution to be increased at a lower computational cost, but thermal conduction is not fully implemented in it. We ran the test case at equivalent resolutions of 512 3 and 1024 3 cells, and while the details of the flow are different, the integrated X-ray luminosities seem to reach convergence. The peak luminosity in the higher-resolution runs is a factor of ∼ 2 larger than in the 256 3 model sr0tsn5z0, and the times above 10 34 , 10 35 , and 10 36 erg s −1 are larger by a factor of ∼ 1.5. All the results presented above therefore carry an uncertainty of this order of magnitude due to the limited resolution.
COMPARISONS WITH OBSERVATIONS
It is useful to compare our numerical models to the observations of particular superbubbles. We have thus turned our attention to four superbubbles located in the Large Magellanic Cloud (LMC): N70, N185, DEM L50 and DEM L152. These superbubbles have some particular features (as we will show) that make them compatible with our results.
N70 is a superbubble with a radius of approximately 53 pc. According to the observations, it contains an SN located closer to the centre than to the edge of the cluster. The X-ray luminosity reported by Reyes-Iturbide et al. (2014) is about 2.4(±0.4)×10 35 erg s −1 . Our numerical results are in good agreement with these observations, in particular the models with the SN explosion at the centre of the cluster.
The case of N185 is quite similar to that of N70. N185 has a spherical shape with an approximate radius of 43 pc (Oey 1996). The X-ray luminosity obtained by Reyes-Iturbide et al. (2014) is 2.1(±0.7)×10 35 erg s −1 . Following Rosado et al. (1982), a possible explanation for the high velocity of this superbubble is that a SN explosion occurred. From its spherical shape, we conclude that the SN explosion must have been located near the centre of the cluster. As in the previous case, our numerical models are in good agreement with the observed X-ray luminosities.
Two other interesting cases are DEM L50 and DEM L152, two superbubbles with very intense X-ray emission. According to observations, DEM L152 has a radius of approximately 50 pc (Jaskot et al. 2011), and DEM L50 has roughly the same radius (Oey 1996). Jaskot et al. (2011) reported an X-ray luminosity in the range 2.0-4.0×10 36 erg s −1 for DEM L50 and of 5.4-5.7×10 35 erg s −1 for DEM L152. These two superbubbles contrast with N70 and N185 in that they contain an off-centre SN explosion, which is clearly distinguishable in the observations. Only numerical model sr10tsn5zv predicts a luminosity comparable to the observed values; the luminosities predicted by the other models are not high enough to match these observations.
In order to better compare with observations, we have calculated the hardness ratio for three of our models (see Figure 12). These models are those discussed in Section 3: two with a centred SN explosion that differ in the presence of thermal conduction, and a third with the SN explosion 10 pc off-centre. Following Jaskot et al. (2011), the hardness ratio is defined as

Q = (H − S) / (H + S),

where H is the energy flux in the 2-10 keV energy band (corresponding to hard X-rays) and S is the energy flux in the 0.1-2 keV energy band (corresponding to soft X-rays). After computing the fluxes and obtaining Q for each cell in the simulation, we integrate along the z axis in order to project the result into a 2D map, assuming that the X-ray absorption due to the material inside the bubble can be neglected.
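The map construction can be sketched as follows, assuming the usual convention Q = (H − S)/(H + S), in which Q = −1 corresponds to purely soft and Q = +1 to purely hard emission, with the band fluxes integrated along z before forming Q. The emissivity cubes below are synthetic placeholders, not simulation data:

```python
import numpy as np

# Synthetic per-cell band emissivities on a 64^3 grid; the hard band is
# made much fainter, mimicking a soft-dominated bubble interior.
rng = np.random.default_rng(42)
shape = (64, 64, 64)
soft = rng.random(shape)           # 0.1-2 keV emissivity per cell
hard = 0.05 * rng.random(shape)    # 2-10 keV emissivity per cell

# Project band fluxes along the line of sight (z), neglecting absorption.
S = soft.sum(axis=2)
H = hard.sum(axis=2)

# Hardness-ratio map: bounded in [-1, 1] by construction.
Q = (H - S) / (H + S)
print(Q.shape, float(Q.min()), float(Q.max()))
```

With these placeholder fluxes the map sits near Q ∼ −0.9 everywhere, i.e. strongly soft-dominated, comparable in character to the Q ∼ −1 conduction case described below.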
In Figure 12 we observe that, before the SN explosion occurs, in the models without thermal conduction (left and right columns) the hardness ratio peaks at ∼ −0.55 at the centre of the stellar distribution and decreases as we approach the edge of the cluster (the shell of swept-up ISM material emits mainly soft X-rays). In the model that includes thermal conduction (middle column), we observe that Q ∼ −1 in most of the bubble, indicating that hard X-ray emission is largely negligible: thermal conduction cools the material ejected by the stars quickly from hard X-ray emitting temperatures (∼ 10 8 K) to temperatures in the range 10 5 -10 6 K, where soft X-ray emission is favored.
The SN explosion drastically alters the hardness maps for the models without thermal conduction. The SN shockwave sweeps through the cluster volume, emptying the centre of the bubble of hard X-ray emitting gas and forming regions with Q ∼ −0.6 closer to the edge of the bubble. In the model with thermal conduction, the hard X-ray emission is small to begin with, and the effect of the explosion on the hardness ratio is not as noticeable.
The predicted Q values obtained in our numerical models are similar to those obtained by Jaskot et al. (2011) for DEM L50 and DEM L152. Nevertheless, the specific details and assumptions of our simulations make it hard to establish direct comparisons with specific observed bubbles. In order to use the hardness ratio to diagnose the physical processes that occur or have occurred in the superbubbles, we would need to simulate the specific physical details of each particular bubble separately, such as the position and the mass and energy injection rates of each star and the ISM density, which is beyond the scope of this work.
CONCLUSIONS
In this paper we present 3D hydrodynamical models of the evolution of the soft and hard thermal X-ray luminosities produced inside superbubbles driven by massive stellar winds including the effect of a supernova explosion and thermal conduction.
In all models we include the injection of mass and energy by a cluster of wind sources and a single SN event. We have varied the position of the SN with respect to the centre of the cluster as well as its detonation time. We have also computed models with a uniform metallicity and models in which the environment, the winds and the SN have different metallicities. The metallicities are used to calculate the radiative cooling rate and affect the X-ray emissivity. Finally, we have taken into account the effects of thermal conduction in two of the models.
In the models with different metallicities we used Z = 0.3 Z for the environment, 3 Z for the winds and 10 Z for the SN ejecta. Our models show that the contribution of the metallicity of the winds and the supernova remnant is negligible for the soft X-ray emission of superbubbles, but becomes important for the hard X-ray component. In these models the ratio of soft to hard maximum luminosity can be as extreme as 10 (i.e. the hard X-ray luminosity reaches 10% of the one for soft X-rays).
The models with thermal conduction result in a noticeable increase in the total luminosity of soft X-rays, by a factor of ∼ 1.25. However, this factor is smaller than the two orders of magnitude difference predicted in the standard model of Weaver et al. (1977) and Chu & Mac Low (1990). The differences are likely to come from the fact that the standard model of Weaver et al. (1977) considers just a single star, and the extension to a star cluster in Chu & Mac Low (1990) and Chu et al. (1995) does not account for the cooling of the gas due to the interaction of the stellar winds. Thermal conduction has a slightly larger effect on the total integrated emission of hard X-rays, increasing the luminosity by a factor of ∼ 2.6.
The most important contribution to the emission of soft and hard X-rays is produced by the injection of mass and energy by supernova explosions. In soft X-rays the luminosity increases by up to two orders of magnitude when we consider a supernova explosion placed at the cluster centre, and by up to three orders of magnitude when it explodes at the edge of the star cluster.
Another important factor to consider is the time during which the luminosity remains high (i.e. observable). We show that when off-centre supernova events occur (close to the shell), the luminosity can increase by one or two orders of magnitude above that predicted by the standard model without a SN, and that it can be maintained for a few tens of thousands of years. Indeed, as the supernova explosion occurs closer to the shell of swept-up ISM, both the maximum soft X-ray luminosity and the time interval during which the luminosity is enhanced increase.
An important increase in the maximum soft X-ray luminosity is produced when the SN ejecta collide with the dense shell of swept-up ISM gas left behind by its interaction with the cluster wind. In these cases X-ray luminosities of 10 36 erg s −1 can be achieved. On the other hand, superbubbles where the SN explosions have not taken place near the shell, such as N 70 and N 185 (Jansen et al. 2011 and Reyes-Iturbide et al. 2014), have lower X-ray luminosities and can be explained using our models with a slightly off-centre SN.
In clusters without SN events, or with a SN placed at the centre of the cluster, the contribution to the luminosity made by the SN is hard to observe, in particular because the observable flux increase in the soft X-ray emission lasts for a short time. This could be happening in massive stellar clusters in the Galaxy, such as Arches, Quintuplet and NGC 3603, that have a hundred massive stars with a total observed X-ray emission of ∼ 10 34 erg s −1 .
Anaesthetic Techniques and Strategies: Do They Influence Oncological Outcomes?
Background: With the global disease burden of cancer increasing, and with at least 60% of cancer patients requiring surgery and, hence, anaesthesia over their disease course, the question of whether anaesthetic and analgesia techniques during primary cancer resection surgery might influence long term oncological outcomes assumes high priority. Methods: We searched the available literature linking anaesthetic-analgesic techniques and strategies during tumour resection surgery to oncological outcomes and synthesised this narrative review, predominantly using studies published since 2019. Current evidence is presented around opioids, regional anaesthesia, propofol total intravenous anaesthesia (TIVA) and volatile anaesthesia, dexamethasone, dexmedetomidine, non-steroidal anti-inflammatory medications and beta-blockers. Conclusions: The research base in onco-anaesthesia is expanding. There continue to be few sufficiently powered RCTs, which are necessary to confirm a causal link between any perioperative intervention and long-term oncologic outcome. In the absence of any convincing Level 1 evidence recommending a change in practice, long-term oncologic benefit should not be part of the decision on choice of anaesthetic technique for tumour resection surgery.
Introduction
The global disease burden of cancer is significant; it is responsible for 10 million deaths globally, which is an increase of 21% since 2010 [1]. This trend is projected to continue until at least 2040 and is the result of globally aging populations [2]. It is well recognised that deaths from cancer do not fully illustrate the impact of cancer on patients and their families and on global health services.
Management of solid tumours can take the form of medical or surgical treatment, or a combination. As many as 60% of solid tumours are amenable to primary resection surgery with curative intent. Over 80% of patients with a cancer diagnosis will receive anaesthesia and surgery at some point in their disease journey, including diagnostic or palliative procedures.
Surgical intervention itself may play a role in oncological outcomes. While removal of the primary tumour is the mainstay of treatment for many solid cancers, inadvertent displacement of minimal residual disease, such as microscopic tumour cells, into the circulation during resection could potentially facilitate development of metastases [3]. In addition, the surgical stress response, which is characterised by modulation of the immune, inflammatory and adrenergic systems, may influence rates of cancer metastases and disease progression [4]. Complex interactions between multiple immune factors and metastatic deposits play a role in the propagation of metastatic disease. These interactions present potential therapeutic avenues to minimise the risk of metastatic disease becoming established at the time of cancer surgery. This poses interesting questions regarding the impact of surgery on cancers and potential avenues for pharmacological intervention [5] (Figure 1).

Anaesthetic techniques and strategies have also been implicated in recent years as both potentially harmful techniques that worsen oncological outcomes and potential therapeutic avenues that can minimise the risk of cancer recurrence and metastases [6].
Techniques examined include, but are not limited to, intra-operative opioid use, volatile inhalational anaesthesia and propofol total intravenous anaesthesia [TIVA], regional anaesthesia, the systemic use of amide local anaesthetics, dexamethasone and use of dexmedetomidine (a highly selective alpha-2 adrenergic agonist). To facilitate further research, a workgroup has determined Standardised End-Points (StEPs) to standardise measured primary and secondary outcomes in clinical trials of the effect of peri-operative interventions on oncologic outcomes [7]. Here, we review current evidence around a number of anaesthetic-analgesic techniques in relation to long-term cancer outcomes.
Opioids
The use of opioids is a mainstay of anaesthesia and perioperative analgesia in patients undergoing cancer surgery. Of primary interest are the immune-modulating effects of opioid medications, and how they may be of relevance in cancer immunology. The direct and indirect effects of opioids on cancer cells and the effects on cells involved in antitumour immunity, such as NK cells, macrophages and T-cells, are well described in a paper from Boland and Pockley [8]. Theoretically, opioid receptor expression on tumour cells can be implicated in cancer cell proliferation and cancer migration; therefore, the opioids we administer therapeutically after surgery could interact with tumour opioid receptors and increase tumour activity [9]. As described in a retrospective study [10], in metastatic prostate cancer, increased tumour MOR expression resulted in reduced progression-free and overall survival.
As a result, there has been a new focus on detailed genomic analyses of patients' excised tumour tissue and how individual patient tumour gene expression may interact
As a result, there has been a new focus on detailed genomic analyses of patients' excised tumour tissue and how individual patient tumour gene expression may interact with perioperative opioid use during tumour resection surgery and subsequent oncological outcomes. This focus is welcomed because any true effects of anaesthetic-analgesic interventions during cancer surgery may be measurable only in defined tumour subtypes and on individual patients' tumour genomic expression [11].
However, in a retrospective study of over 8000 patient tumour samples [12], utilising the Cancer Genome Atlas, no correlation between oncological recurrence and opioid receptor expression was shown across a wide variety of tumour types. Additionally, a further retrospective cohort study [13] examining patients with stage I-III colorectal cancer demonstrated that increased expression of Opioid Growth Factor Receptor and mu-opioid receptor (MOR) was not associated with altered recurrence rates. Furthermore, in a randomised controlled trial (n = 146), patients undergoing radical prostatectomy for intermediate- and high-risk (D'Amico) prostate cancer were randomised to receive either opioid-free or opioid-based anaesthesia. No statistically significant difference in biochemical recurrence rates or recurrence-free survival was observed, but the trial was underpowered to detect the latter [14].
A different approach has been taken by a New York group, who evaluated the influence of patient-specific tumour gene expression on responses to intraoperative interventions during tumour resection. A retrospective study focusing on intraoperative opioid use during primary resection of stage I-III colon adenocarcinoma found that tumour recurrence was lower in patients with higher cumulative intra-operative opioid doses. In addition, immunohistochemistry analysis identified that, in tumours with diminished DNA mismatch repair (MMR) ability, there was a stark reduction in recurrence compared with tumours with preserved DNA MMR [15]. A separate retrospective analysis [16] applied the same principle, evaluating triple-negative breast tumours from 1143 patients who had undergone surgical resection; gene-expression profiling showed that individual tumours expressed a range of opioid receptors. This analysis illustrated not only downregulation, or even absence, of pro-tumour receptors in the presence of opioid agonism, but also an association with upregulation of anti-tumour receptors. Taken together, these findings suggest an association between intraoperative opioids and improved recurrence-free survival in triple-negative breast cancer, but not improved overall survival.
Another retrospective study of 239 patients undergoing resection of hepatocellular carcinoma [17] examined the effect of low-dose versus high-dose post-operative morphine requirements on oncologic outcomes. High dose (86 mg morphine equivalent) was defined as above the median value of opioid use across both arms of the study. Patients receiving high-dose morphine had increased all-cause mortality, but this did not correlate with cancer recurrence risk. Beyond its retrospective design, this study had further limitations, including small sample size, lack of tumour genomic testing and not accounting for the complexity of surgery, which could explain the increased intra- and post-operative opioid requirements. A separate retrospective study which utilised tumour genomic sequencing [18] examined 740 patients with stage I-III lung adenocarcinoma and demonstrated a varied impact of higher intra-operative opioid use depending on tumour gene expression.
Despite the widespread use of opioids during surgical resection of solid tumours, the level of understanding of how these medications influence oncological outcomes remains suboptimal. While laboratory research conducted a decade ago initially suggested that opioids might have a detrimental impact on cancer, facilitating tumour cell survival, new research examining tumour sub-types and intra-tumoral gene expression highlight how nuanced the potential effect of perioperative opioids during cancer resection surgery on oncological outcomes may be.
Regional Anaesthesia Techniques
Regional anaesthesia techniques, both central neuraxial and peripheral nerve blocks, have been associated with improved oncological outcomes in some observational studies. The basis of this hypothesis is that regional anaesthesia preserves immune function and reduces surgical stress peri-operatively, reducing postoperative inflammation and thus reducing the risk of cancer recurrence by inhibiting pro-tumour pathways [19]. Potential mechanisms for modulation of this pro-tumour pathway are examined in depth in a recent paper by Li et al., who described the effects that local anaesthetics may have on tumour cells directly, catecholamine release, voltage gated sodium channels, systemic angiogenic factor concentrations and a reduction in postoperative pain and opioid use, as well as how factors may influence oncological outcomes [20]. Furthermore, regional anaesthesia techniques allow for potential reduction in exposure to volatile anaesthetic agents, which some translational research suggests may be of benefit in reducing cancer recurrence.
A retrospective study from Danish national databases, which examined 11,618 patients with colorectal cancer who had surgery between 2004 and 2018 and were followed over a median duration of 58 months, found that epidural anaesthesia was not significantly associated with lower cancer recurrence rates compared with general anaesthesia alone (hazard ratio, 0.91; 95% CI, 0.82 to 1.02) [21]. A smaller retrospective study of 218 patients with pancreatic cancer who underwent resection with curative intent demonstrated no alteration in overall survival or recurrence rate when epidural anaesthesia was utilised (HR: 0.98; 95% CI, 0.78-1.24; p = 0.87 and HR: 1.02; 95% CI, 0.82-1.27; p = 0.85, respectively) [22]. These data should be interpreted in the context of this study being underpowered.
However, after over a decade of conflicting findings from observational studies, a large, multi-centre randomised controlled trial evaluating the effect of paravertebral regional anaesthesia-analgesia versus volatile anaesthesia with opioid analgesia on oncologic outcomes was conducted in women undergoing primary breast cancer resection. Some 2108 patients with breast cancer were randomised to either paravertebral regional with propofol general anaesthesia or volatile general anaesthesia with opioid analgesia and followed up for a median of over 3 years. The incidence of breast cancer recurrence was approximately 10% in both groups, indicating robust neutral findings [23].
A modest trial of 180 patients with colorectal cancer undergoing primary resection was performed [24], in which patients were randomised to either general anaesthesia plus opioid-based patient-controlled analgesia or general anaesthesia plus thoracic epidural anaesthesia. The primary outcome was the surrogate end-point of return to intended oncologic treatment (RIOT); a delay in RIOT can worsen cancer prognosis and is often a confounding factor in these studies. They reported no difference in RIOT. There were also neutral findings regarding the effect of these analgesic techniques on cancer recurrence, although the study was underpowered to evaluate this. Interestingly, this study utilised an epidural regimen that combined an opioid with local anaesthetic, adding a further confounding factor. An alternative would have been a plain local anaesthetic epidural infusion, eliminating any opioid effect on the tumour.
A further RCT that examined 40 patients with advanced ovarian cancer, comparing intraperitoneal ropivacaine and 0.9% saline with the primary outcome of RIOT, demonstrated that the intraperitoneal ropivacaine group achieved the primary outcome significantly sooner (median 21 (inter-quartile range 21-29) vs. 29 (inter-quartile range 21-40) days; p = 0.021) [25]. Whether this surrogate outcome measure (RIOT) translates into a measurable benefit in patient-centric oncologic outcomes remains to be seen in a properly powered RCT.
Another RCT with delirium as its primary end-point (n = 1712) undertook long-term follow-up (median 66 months) of patients with a variety of non-cardiothoracic and abdominal cancers who had been randomised to either combined epidural and general anaesthesia or general anaesthesia alone for surgical resection [26]. Epidural anaesthesia reduced the 7-day incidence of delirium. However, there was no statistically significant difference between the groups in overall survival (HR 1.07; 95% CI, 0.92 to 1.24; p = 0.408), cancer-specific long-term survival (HR 1.09; 95% CI, 0.93 to 1.28; p = 0.290) or recurrence-free survival (HR 0.97; 95% CI, 0.84 to 1.12; p = 0.692). This study has significant limitations, however. First, 8% of included participants had non-cancer surgery, albeit evenly distributed between the groups. Second, the epidural anaesthesia group also received sufentanil in their epidural infusions, resulting in one group receiving a long-acting opioid (general anaesthesia alone) and one group receiving a short-acting opioid (epidural anaesthesia group). The study also was not originally designed to examine long-term survival and is therefore underpowered to detect subtle differences in cancer survival. Nonetheless, it signals no meaningful effect of epidural anaesthesia on long-term oncologic outcomes.
A further randomised control trial examining 400 patients undergoing a Video-assisted Thoracoscopic Surgery (VATS) for lung cancer randomised patients to general anaesthesia with or without epidural anaesthesia over a median follow up of 32 months and found no significant difference in overall survival (HR 1.12; 95% CI, 0.64 to 1.96; p = 0.697) or recurrence-free survival (HR 0.90; 95% CI, 0.60 to 1.35; p = 0.608) between the groups [27]. The epidural infusion in this study also contained sufentanil.
Aggregating these findings, it can be concluded that, while regional anaesthesia has many benefits for patients undergoing cancer surgery, the best available evidence indicates that it has a neutral influence on oncologic outcomes.
Propofol Total Intravenous Anaesthesia (TIVA) and Volatile Anaesthesia
Laboratory studies had indicated a signal that the effect of propofol on tumour cell biology, inflammation and immune function might be more favourable in preventing recurrence compared with volatile agents [28]. This has been supported by a number of observational clinical reports. Initially, a retrospective study including 7000 patients with various cancer diagnoses, after propensity matching, suggested an association between propofol TIVA and clinically significant improvement in survival compared with inhalational anaesthesia, with multivariate analysis demonstrating a higher risk of death in the inhalational group (HR 1.46; 95% CI, 1.29 to 1.66) [29]. Subsequent meta-analyses have supported this hypothesis; one, including 19 retrospective studies, showed an association between propofol TIVA and improved disease-free survival versus inhalational anaesthesia [30]. This meta-analysis is, however, limited by its reliance on multiple small retrospective studies.
A small randomised controlled trial that examined n = 210 patients demonstrated no statistically significant difference between propofol TIVA and volatile anaesthesia cohorts in postoperative circulating tumour cell counts (RR 1.27 [95% CI, 0.95 to 1.71]; p = 0.103) [31]. A smaller RCT (n = 153) studying colorectal adenocarcinoma and the impact of anaesthesia technique on circulating levels of Natural Killer immune cells and T-cells post-operatively found no difference between propofol TIVA and sevoflurane cohorts at 24 h (RR −2.6 [95% CI, −6.2 to 1.0]; p = 0.151) [32]. A small RCT examining peri-operative levels of markers of NETosis (neutrophil extracellular trap formation, a process implicated in cancer progression and metastasis) in 40 patients with breast cancer demonstrated no difference between patients who received regional anaesthesia and those who received opioid analgesia during breast cancer resection [33].
As always, no number of observational studies can provide Level I evidence for a causal relationship between any anaesthetic technique and cancer recurrence. A number of RCTs studying this area are currently being conducted, which will hopefully bring some clarity. Notably, the CAN (Cancer and Anaesthesia) study is randomising patients undergoing breast cancer surgery to propofol TIVA or sevoflurane and measuring long-term cancer outcomes. Interim analysis from this study demonstrated no difference in overall survival; however, five-year follow-up is not yet complete [34]. Recruitment is also currently ongoing for the VAPOR-C trial (Volatile Anaesthesia and Perioperative Outcomes Related to Cancer), which aims to recruit 3500 patients undergoing surgery for colorectal cancer or non-small cell lung cancer across multiple centres. This trial has disease-free survival as its primary outcome and is a 2 × 2 factorial design comparing propofol TIVA with volatile anaesthesia, and systemic lidocaine with placebo within each general anaesthesia arm of the trial.
In summary, while a number of inherently limited retrospective studies have suggested benefits of propofol TIVA for overall survival, there are as yet no data from a large RCT, which would be necessary before any change in practice can be recommended.
Dexamethasone
Dexamethasone is a glucocorticoid often administered during anaesthesia to prevent postoperative nausea and vomiting. The potential impact of this practice on oncological outcomes has been raised, yet clarity remains elusive. It is hypothesised that the immunosuppressive effects of a steroid could increase the likelihood of distant metastases. This complex signalling pathway remains poorly characterised; however, a 2016 study showed that dexamethasone has a lymphodepletive effect on immune cells, primarily affecting CD4+ T cells but also CD8+ T cells, dendritic cells and regulatory T cells (Tregs) [35]. Alternatively, dexamethasone's anti-inflammatory and anti-angiogenic properties may in fact inhibit cancer and improve oncological outcomes, with increased metastasis-free survival.
This uncertain picture is illustrated in a study utilising xenograft mouse models [36] that examined the effect of glucocorticoids on breast cancer progression. The authors described the complex signalling pathways engaged by dexamethasone in different tumour cells and how these make interpretation of oncological effects difficult. This study concluded that low-dose dexamethasone may have beneficial effects, reducing tumour growth and mitigating the risk of metastases, while high-dose dexamethasone may in fact cause harm, increasing the risk of breast cancer progression.
A retrospective, cohort study of 2628 patients who underwent breast cancer surgery found that the 8.5% of patients who received single dose dexamethasone had no change in risk of recurrence (HR 1.389; 95% CI, 0.904-2.132; p = 0.133) or mortality (HR 1.506; 95% CI, 0.886-2.561; p = 0.130) on propensity scoring [37]. A separate study of 373 patients with pancreatic ductal adenocarcinoma elicited similar results, concluding that there was no improvement in recurrence-free (17 vs. 17 months; p = 0.99) or overall (46 vs. 43 months; p = 0.90) survival amongst the 60% of patients who received dexamethasone [38].
Interestingly, a retrospective study of 185 patients with bladder cancer who underwent radical resection concluded that patients who received glucocorticoids had a shortened metastasis-free survival time (HR 1.790; p = 0.030) when the confounding variable of intraoperative blood transfusion was excluded from the analysis [39].
In contrast, a recent, large retrospective study involving >30,000 patients who had a solid cancer resection found that, in cancer patients not amenable to immune modulator therapy, peri-operative dexamethasone was associated with decreased one-year mortality (HR 0.82; 95% CI, 0.69-0.96; p = 0.016) and cancer recurrence (Adjusted Odds Ratio 1.28; 95% CI, 1.18-1.39; p < 0.001) [40]. However, this does not prove a causal link, which requires an RCT. Therefore, while dexamethasone and its effects on oncological outcomes continue to be researched, there is currently little evidence justifying change in clinical anaesthesia practice on the basis of a benefit in cancer outcomes.
Dexmedetomidine
Dexmedetomidine is a highly selective alpha-2 adrenergic agonist, initially licensed for sedation in intensive care units in 1999, which has since seen broader use in anaesthesia. Even at that time, it was known that dexmedetomidine preserved Natural Killer (NK) cell function peri-operatively, likely due to cortisol level suppression [41]. This preservation of NK cells was hypothesised as a mechanism for improving oncological outcomes during surgery, preventing cancer progression.
While dexmedetomidine theoretically appeals as an adjunct in onco-anaesthesia because of its NK-cell preservation and its sympatholytic and anti-inflammatory properties, the evidence base does not support its adoption into clinical practice for oncological purposes. Additionally, the complex interactions between the immune system and tumour growth and metastasis should be borne in mind when considering any immune-modulating medication during cancer surgery, a high-risk period from a metastatic point of view.
A laboratory investigation utilising ovarian cancer xenograft mouse models found that NK cell function recovered faster, and tumour burden at four weeks was lower, in the dexmedetomidine group [42]. However, an RCT involving 100 patients with uterine cancer demonstrated no favourable impact on NK cells (p = 0.496) and no statistically significant difference in rates of recurrence (p = 0.227) or death within two years (p = 0.318) [43]. Given the small sample size, this study was underpowered to detect subtle differences in recurrence. However, the rates of both end points were lower in the dexmedetomidine cohort (16.3% vs. 8.7% and 6.7% vs. 2.2%, respectively).
NSAIDs/COX 2 Inhibitors and Beta Blockers
NSAIDs and their potential impact on oncological outcomes have been extensively researched in laboratory studies and in observational, retrospective studies. However, there remains a relative scarcity of well-powered prospective randomised controlled trials that would justify adjusting anaesthesia practice regarding NSAIDs to improve oncological outcomes [45]. The anti-inflammatory properties of these drugs are suggested to reduce cancer cell resistance to common treatment modalities, such as chemo- and radiotherapy, by inhibiting cyclo-oxygenase 2 (COX-2), which is often overexpressed in cancer cells. COX-2 expression in cancer cells has been shown to promote carcinogenesis through effects on cancer stem cell-like activity, apoptotic resistance, proliferation, angiogenesis, inflammation, invasion and metastasis [46].
Two relatively sizable studies that examined extended courses of NSAIDs after initial surgical management have failed to demonstrate any benefit of protracted NSAID exposure. First, an RCT in which >2500 patients with stage 3 colorectal cancer were randomised to receive either celecoxib 400 mg once daily or placebo for 3 years in conjunction with FOLFOX adjuvant chemotherapy [47] found no difference in three-year disease-free survival (HR for disease recurrence or death 0.89; 95% CI, 0.76-1.03; p = 0.12) or in five-year overall survival (HR for death 0.86; 95% CI, 0.72-1.04; p = 0.13). A second RCT, which examined >2600 patients with ERBB2-negative breast cancer, likewise demonstrated no benefit in five-year disease-free survival (unadjusted HR 0.97; 95% CI, 0.80-1.17; log-rank p = 0.75) with celecoxib 400 mg once daily for two years [48].
Recent RCTs have attempted to add to the evidence base regarding the use of NSAIDs to improve oncological outcomes; however, these studies are difficult to interpret because of their small sample sizes. The first, a 34-patient study, randomised patients with colorectal cancer to receive either propranolol and etodolac (a COX-2 inhibitor), beginning 5 days before surgery and continuing for 20 days perioperatively, or placebo [49]. It demonstrated a weakly favourable impact on tumour molecular markers of metastatic potential, together with a reduced rate of recurrence (p = 0.05) in the treatment cohort. A second RCT followed 80 patients who underwent hepatectomy for hepatocellular carcinoma, allocating them to parecoxib sodium 40 mg or placebo. Disease-free survival was significantly longer in the treatment arm (19.0 months; 95% CI, 9.8-28.2 vs. 14.0 months; 95% CI, 8.1-19.9; p < 0.05), but this did not translate into significantly longer overall survival (36.0 months; 95% CI, 13.4-58.9 vs. 14.0 months; 95% CI, 10.6-25.4; p > 0.05) [50].
While these recent additions of randomised controlled trials to the evidence base on peri-operative NSAIDs and beta-blockers and their potential oncological benefits are welcome, and appear somewhat promising, enthusiasm is significantly tempered by the modest size of both trials. They nonetheless highlight a potentially promising therapeutic avenue that should be explored further in sufficiently powered trials.
Encouragingly, an RCT from a number of Indian centres in women undergoing breast cancer surgery with curative intent has just been reported. This group randomised almost 1600 women to an active arm, who received infiltration of the amide local anaesthetic lidocaine (0.5%, up to 4.5 mg.kg-1 body weight) 7-10 min before surgical excision ("LA"), and compared them with a control group that did not receive this lidocaine infiltration ("No LA"). Median follow-up was >5.5 years (68 months). In the LA and No LA arms, 5-year disease-free survival rates were 87% and 83% (HR 0.74; 95% CI, 0.58 to 0.95; p = 0.017) and 5-year overall survival rates were 90% and 86%, respectively (HR 0.71; 95% CI, 0.53 to 0.94; p = 0.019). The effect of LA was similar in subgroups defined by menopausal status, tumour size, nodal metastases, and hormone receptor and human epidermal growth factor receptor 2 status. No adverse effects from lidocaine were observed [51]. This is the first trial to report a positive effect of a single perioperative intervention on long-term oncologic outcomes, and it will encourage ongoing efforts among anaesthesiologists and other clinicians to complete further trials testing the long-term oncologic effects of perioperative interventions during primary cancer surgery (Table 1).
Conclusions
While the research base examining onco-anaesthesia is expanding, it still consists largely of laboratory investigations and observational retrospective studies, which are inherently limited and cannot be the basis for a change in practice. There remains a relative paucity of the sufficiently powered RCTs needed to confirm a causal link between any perioperative intervention and long-term oncologic outcome. A summary of the main registered RCTs in this field is shown in Table 1. Results from ongoing RCTs, such as the CAN and VAPOR-C trials, are awaited. Future trials may require documentation of the effects of anaesthesia-analgesia on precise tumour subtypes, in addition to taking account of patient-specific tumour genomic analysis. In the absence of convincing Level 1 evidence recommending a change in practice, long-term oncologic benefit should not be part of the decision when choosing an anaesthetic technique for tumour resection surgery. All currently available anaesthetic-analgesic techniques are valid for cancer patients, and the choice should continue to be a shared decision between patient, anaesthesiologist and surgeon based on known risks and benefits.
Funding: This research received no external funding.
Conflicts of Interest:
The authors declare no conflict of interest.
Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids, which is determined by sensor pairs other than the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-site blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
Passive location techniques, in which signals travel from a source whose location is estimated to sensors whose positions are known, are widely used in many areas. Researchers employ passive location techniques for whale tracking 1,2 , structural health monitoring [3][4][5] , seismic/microseismic source inversion [6][7][8][9] , and seismic tomography [10][11][12][13] . Microseismic source location is a typical application of the passive location technique and, in conjunction with other geophysical tools such as muography 14 , can potentially provide valuable information about the lithosphere. Because of the characteristics of the received signals, microseismic source location techniques in local monitoring operations generally use P-wave or S-wave arrival times, or both, to locate sources. Although several advanced picking-free techniques have been proposed, such as the source-scanning algorithm (SSA) by Kao and Shan 6 , the envelope-stacking-based method by Gharti et al. 15 , the waveform coherence analysis by Grigoli et al. 16 , and many others 17,18 , these methods use a whole or partial seismic waveform and are therefore time consuming. Moreover, due to the complexity of the location procedure, these non-standard picking-free techniques often use grid-searching algorithms instead of fast local searching algorithms, which further decreases the location efficiency and resolution.
The picking quality of arrival times directly affects the accuracy of source location. At the local scale, the near-field effects present in the seismic wavefield cannot be ignored, and the P-wave and S-wave are therefore often intertwined 19 , which complicates picking and identifying seismic phases. Using current techniques, such as the STA/LTA method 20 and the higher-order statistics method 21 , P-wave arrival times can be picked accurately for signals with relatively high signal-to-noise ratios; however, reliable picking of the S-wave is still problematic for local events in which the P coda overlaps with the S-wave 16 . For low signal-to-noise ratios, even P-wave picking is unsatisfactory, because the beginning of the P-wave may be concealed by noise. Practical applications have also reported that sensors are likely to be triggered by S-waves that are wrongly assigned P-wave velocities 22 . These cases illustrate that the arrival times of both P- and S-waves may contain large picking errors (LPEs), especially when automated picking programs are used.
Results
Synthetic tests. We use synthetic tests to verify the performance of the VFOM. Because of the non-repeatability and non-verifiability of real events in an opaque medium, it is difficult to measure the error between calculated and real sources. Synthetic tests, with controllable errors in their input data such as arrival times and velocity structures, are therefore flexible for comparing the performance of location methods under different conditions. Specifically, we arrange a 400 m × 400 m × 400 m cubic array with 8 sensors (or stations) at its corners to receive signals. Two "real sources", one inside the array and the other outside it, generate the microseismic signals. To simulate the uncertainty of the velocity along different paths, the velocity of each path is generated randomly in the range from 4875 to 5125 m·s −1 . In addition, to simulate small systemic picking errors, an extra random error term from −2 ms to 2 ms is added to each arrival time. In the case of measured arrival times containing LPEs, a dramatic error (±100 ms), far larger than the systemic picking error, is added to the arrival times with different probabilities. Thus, the simulated arrival times combine velocity uncertainty, systemic picking errors and different LPE probabilities.
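The synthetic set-up described above can be sketched as follows; the sensor coordinates, random seed and function name are illustrative assumptions rather than the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# 8 sensors at the corners of a 400 m x 400 m x 400 m cube.
sensors = np.array([[x, y, z] for x in (0, 400)
                    for y in (0, 400) for z in (0, 400)], dtype=float)

def synth_arrivals(src, t0=0.0, lpe_prob=0.2):
    """Arrival times with per-path velocity jitter, small systemic
    picking errors and (with probability lpe_prob) large picking errors."""
    dist = np.linalg.norm(sensors - src, axis=1)
    v = rng.uniform(4875.0, 5125.0, len(sensors))   # per-path velocity, m/s
    t = t0 + dist / v
    t += rng.uniform(-2e-3, 2e-3, len(sensors))     # +/- 2 ms systemic error
    lpe = rng.random(len(sensors)) < lpe_prob       # which picks get an LPE
    t[lpe] += rng.choice([-0.1, 0.1], lpe.sum())    # +/- 100 ms LPE
    return t

arrivals = synth_arrivals(np.array([150.0, 250.0, 100.0]))
```

Repeating `synth_arrivals` with fresh random draws reproduces the 100-trial statistical experiment described below.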
Four traditional location methods are applied for comparison with the VFOM. The objective functions of these four methods are given by equations (1)-(4). For brevity, we denote them TL2, TL1, DL2 and DL1, respectively. The TL2 and TL1 methods both use the residuals between the observed and theoretical arrival times to define their objective functions; the only difference is that TL2 uses the L2 norm whereas TL1 uses the L1 norm. In contrast, DL2 and DL1 minimize the residuals between the observed and theoretical travel-time differences of station pairs. DL2 and DL1 resemble the double-difference method in form but are in fact very different, as DL2 and DL1 use station pairs whereas the double-difference method uses event pairs 28 . The quasi-Newton algorithm, one of the most efficient and effective algorithms for solving unconstrained optimization problems, is employed to search for the optimum solutions. As is well known, when the objective function has more than one peak, local searching algorithms may converge to local solutions if improper initial values are chosen; the result thus depends on the initial value. To eliminate this dependence, 50 initialize-and-search attempts are made in each location process, and the solution with the least objective value is selected as the final solution. Like many optimization algorithms requiring derivatives, the searching algorithm used in this paper is time-saving: a grid-searching process may require several hours 16 , whereas an entire VFOM process takes less than 1 second on an ordinary PC.
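A minimal sketch of the four baseline objectives and the multi-start search might look like the following; the homogeneous velocity, the search bounds and all function names are assumptions, and since the L1 objectives are non-smooth, a derivative-based quasi-Newton step is only an approximation of the paper's set-up.

```python
import numpy as np
from scipy.optimize import minimize

def residuals(x, sensors, t_obs, v):
    """Residuals for a trial hypocentre x = (px, py, pz, t0)."""
    return t_obs - (x[3] + np.linalg.norm(sensors - x[:3], axis=1) / v)

def tl_objective(x, sensors, t_obs, v, norm=2):
    """TL2 (norm=2) / TL1 (norm=1): arrival-time residuals."""
    r = residuals(x, sensors, t_obs, v)
    return float(np.sum(r**2) if norm == 2 else np.sum(np.abs(r)))

def dl_objective(x, sensors, t_obs, v, norm=2):
    """DL2 / DL1: travel-time differences over station pairs (t0 cancels)."""
    r = residuals(x, sensors, t_obs, v)
    i, j = np.triu_indices(len(t_obs), k=1)
    d = r[i] - r[j]
    return float(np.sum(d**2) if norm == 2 else np.sum(np.abs(d)))

def locate(sensors, t_obs, v, fun, tries=50, seed=1):
    """Multi-start quasi-Newton search; keep the best of `tries` runs."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(tries):
        x0 = np.append(rng.uniform(-200.0, 600.0, 3), t_obs.min() - 0.1)
        res = minimize(fun, x0, args=(sensors, t_obs, v),
                       method="BFGS", options={"gtol": 1e-12})
        if best is None or res.fun < best.fun:
            best = res
    return best
```

With clean synthetic arrivals all four objectives recover the source; the differences described in the text only appear once LPEs contaminate `t_obs`.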
To obtain reliable statistical conclusions, we repeat each event location process 100 times, randomly changing the picking errors and the propagation velocity along each path, and then collect the absolute location errors. The results of the 100 simulations without LPEs are shown in the boxplots in Fig. 1a. The location errors of the VFOM and the traditional methods are very similar for both the in-array and out-array sources; in other words, the VFOM performs as well as the traditional location methods when no LPEs are present in the arrival times. However, the performance differs in the presence of LPEs, as shown in Fig. 1b. The VFOM still locates accurately and stably, whereas the results from the traditional methods become unreliable. For example, the average location error of the VFOM stays within 10-20 m, while that of the traditional methods reaches hundreds or even thousands of metres. These synthetic tests demonstrate that, for single-source location using arrival times with LPEs, the VFOM performs better owing to its stability and accuracy.
Moreover, the VFOM offers a self-evaluation strategy for picking quality by calculating the valid ratio, defined as the percentage of events successfully located under stopping criterion A (SC-A, as described in "Methods"), i.e., the proportion of optimized objective values greater than the pre-determined threshold. The relationship between the valid ratio and the probability of LPEs is displayed in Fig. 1c. Clearly, the valid ratio declines rapidly as the probability of LPEs increases. This offers an easy way to estimate the quality of a picking program: a higher valid ratio indicates better pickings.
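The SC-A self-evaluation reduces to a one-line computation; the threshold value and helper name below are illustrative assumptions.

```python
import numpy as np

def valid_ratio(opt_values, threshold):
    """Fraction of events whose optimized objective value exceeds the
    pre-determined SC-A threshold (our reading of the acceptance rule)."""
    return float(np.mean(np.asarray(opt_values, dtype=float) > threshold))

# Three of four events exceed an assumed threshold of 0.5:
ratio = valid_ratio([0.9, 0.95, 0.4, 0.85], 0.5)  # 0.75
```

Tracking this ratio over a catalogue of events gives the picking-quality estimate described above without any ground truth.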
In-site explosion events. The data are obtained from a rock phosphorous ore mine located in Guizhou Province, China. After approximately 50 years of excavation, the mining depth has reached approximately 500-800 m below the ground surface. The high in situ stresses lead to a series of engineering problems, such as difficulties in the rock support of the main laneway and rock fall, spalling, slabbing and floor heaving in the permanent laneways after the installation of support 37 . A microseismic monitoring system including 26 single-component sensors and 2 three-component sensors was built to monitor microseismic events (Fig. 2a). We detonated emulsion explosive charges 6 times to test the proposed method; the positions of these explosions were surveyed and treated as the real sources. The frequencies of the received signals ranged from 0 to 200 Hz. The sampling frequency was set to 6000 Hz to cover the signals' frequency range without distortion. In practice, many factors can affect the picking quality of arrival times. For example, 50 Hz power-line interference can lead to LPEs in both P-wave and S-wave picking, especially for arrivals with low signal-to-noise ratios, as shown in Fig. 2b. To address this problem, an LPE-tolerant location method was investigated.
Using the method proposed in this paper, we located the 6 explosion events conducted during August 20-22, 2014. For each event, we used both the VFOM and the traditional methods to obtain comparable results. The quasi-Newton algorithm was employed as the searching process for all the location methods. The arrival times of the P-wave and the S-wave were picked manually to ensure high picking quality. Furthermore, additional LPEs were added to a small part of the manually picked arrival times to simulate the picking deviations caused by automated programs. We set the P-wave and S-wave velocities to 5200 m·s −1 and 3300 m·s −1 , respectively, after several attempts to obtain the best velocity structure for all of the location methods. Stopping criterion B (SC-B) was used to ensure that the VFOM could always locate the source. Table 1 shows the location errors of both the VFOM and the traditional methods for the 6 explosion sources. As their results from input data with LPEs are typically much worse than those from input data without LPEs, the methods using the L2 norm, i.e., DL2 and TL2, are obviously susceptible to LPEs. Compared with DL2 and TL2, DL1 and TL1 perform much better in terms of LPE tolerance, as their results with LPEs were as accurate as those without LPEs when using both P- and S-waves. However, these two methods are still too sensitive to LPEs in scenarios in which the picking of the S-wave is problematic and only P-wave arrival times are available. The VFOM stands out from these location methods because of its stable performance in both the P-based and the PS-based locations, as shown in Table 1. The location errors obtained from input data with LPEs are quite similar to those obtained without LPEs, even when using P arrival times only, making LPEs an unlikely cause of extra location errors.
Regarding PS-based locations, the VFOM can locate at the same position with or without LPEs because the S-waves increase the number of hyperboloids intersecting at the source, which can make the location more stable. Additionally, we display the location errors for the 6 explosion events using two migration-based methods in Table 1. The picking-free migration-based method utilizes the passive kurtosis derivative waveform and the migration process to search for the source 17 . Because the passive kurtosis derivative waveform fails to clearly characterize the arrival time of the waveform for the signals, the result of the picking-free migration-based method was unsatisfactory in our case. We then manually generated a pulse at each arrival time to replace the passive kurtosis derivative waveform and execute the migration process. No significant improvement was achieved in the location result compared to the VFOM (shown in the last row of Table 1).
The stability of the VFOM under a wrong velocity model, e.g., when S-wave arrivals are assigned the P-wave velocity, was also examined. This test has results similar to those obtained by adding large delays (LPEs) to the P-wave arrival times, which confirms the superior location stability of the proposed method.
We used the jackknife method to estimate the algorithm stability and the location uncertainty of the VFOM; TL1, the most stable of the traditional methods (Table 1), was used for comparison. We investigated the locations produced by the two methods both with and without LPEs, and the results are shown in Fig. 3 (black and red circles mark the VFOM locations obtained with and without LPEs, respectively; black and red crosses mark the corresponding TL1 locations). The location uncertainty shows that the VFOM always gives stable results with or without LPEs, whereas the TL1 method faces considerable uncertainty when the number of sensors is small and the input data are contaminated by LPEs. For instance, the VFOM uncertainty of E1 changes by less than 10 m after contamination by LPEs, whereas the TL1 uncertainty increases from 36 m to 218 m. In practice, in-site microseismic monitoring frequently encounters situations in which only a few sensors are triggered, especially for events with very low energy, and these low-energy events often generate signals with a low signal-to-noise ratio, which leads to LPEs. As a consequence, the VFOM is of great practical value for improving the applicability of microseismic monitoring systems.
Discussion
Many researchers have attempted to improve the solution by introducing searching algorithms with better capabilities 30,31 . These techniques achieve their purpose by keeping the searching process from converging to local solutions. However, LPEs dramatically change the objective functions of traditional methods: a local solution may turn into the global solution after LPEs are added to the arrival times (Fig. 4a-d). In this case, one would fail to locate the source using any searching algorithm. Figure 4 shows the objective functions of both the VFOM and traditional methods for a 2D source location problem. For traditional locators, a secondary peak may take the place of the main peak after LPEs are added to the arrival times, whereas this phenomenon rarely occurs with the VFOM. The objective functions of traditional methods are significantly different before and after the input data are contaminated by LPEs; the global solution shifts from the source to somewhere far away. Moreover, the LPE amplitude affects the shift: the larger the LPEs, the more serious the shift. In contrast, the VFOM largely avoids this impact. As shown in Fig. 4e,f, LPEs merely pull the related hyperboloids away from the source; they do not prevent the remaining hyperboloids from intersecting at the source. This mechanism is why the VFOM obtains more stable solutions than traditional methods, even when the input data contain LPEs. A predetermined velocity structure is required by the majority of arrival-time-based location methods. Some error normally exists between the measured and real velocities of the propagation medium, particularly in media containing fissures, e.g., rock mass. Therefore, velocity sensitivity deserves attention when a new location method is proposed. Using a series of velocities ranging from 4000 m·s −1 to 6000 m·s −1 , we collected the location errors for a set of synthetic tests. The results showed that the VFOM has a velocity sensitivity similar to that of traditional methods. It should be noted that all these location methods are less sensitive to velocity errors for events inside the sensor array than for events outside it. To obtain a suitable velocity, the production explosion events that are very common at a mining site can be utilized: their measured positions serve as "real locations" from which the velocity minimizing the location error can be estimated. Moreover, a velocity estimated in this way can be updated over time in response to the velocity changes caused by human activities such as mining.
Developing location methods that do not require a predetermined velocity, treating the velocity as an additional unknown, is another approach to overcoming velocity errors [38][39][40][41][42] . Introducing this idea into the VFOM will be an important direction for future work.
Two stopping criteria are available in the VFOM, i.e., SC-A and SC-B. SC-A tends to give more stable and accurate results, whereas SC-B guarantees that the location program succeeds. Synthetic tests show that most of the results obtained with SC-A have errors smaller than 20 m. With SC-B, although most errors are also below 20 m, some locations have errors greater than 50 m. However, SC-A succeeds for only 40%-50% of events in the case of 20% LPEs, whereas SC-B provides a location for every event. In summary, both SC-A and SC-B have advantages and disadvantages, and they serve as optional choices for practical applications.
Methods
In mathematics, the source location problem amounts to solving an over-determined system of equations, i.e., more equations than unknowns 43,44 :

$$\int_{s_i} \frac{ds}{v} = t_i - t_0, \quad i = 1, 2, \cdots, n,$$

where $s_i$ is the wave propagation path from the source to the $i$th triggered sensor; $v$ is the velocity field in space; and $t_0$ and $t_i$ represent the event's origin time and the arrival time of the wave phase (P-wave or S-wave) measured by the $i$th sensor, respectively. For the classical single-source problem in a homogeneous medium, the system becomes

$$\frac{\sqrt{(x_0 - x_i)^2 + (y_0 - y_i)^2 + (z_0 - z_i)^2}}{v} = t_i - t_0,$$

where $(x_0, y_0, z_0)$ are the coordinates of the source and $v$ is the constant propagation velocity ($v_p$ or $v_s$). The most commonly used method of solving the system is to minimize the sum of squared differences between the left and right sides; the location problem is thus transformed into optimizing an objective function, and the solution is the point in space and time that minimizes the total residual. However, this method has a fatal weakness: when the arrival times contain serious input errors (i.e., LPEs such as mis-picks and outliers), the solution will quite possibly be ruined, because every LPE contributes to the final location 23,24 . This is an important reason why the sources located by many automated microseismic monitoring systems fail to agglomerate into larger clusters in space.
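A minimal sketch of this classical least-squares objective follows; the sensor geometry, velocity and function names are illustrative values, not the paper's implementation.

```python
# Minimal sketch (illustrative values, not the paper's code) of the
# classical least-squares objective: the sum over sensors of squared
# residuals between the observed arrivals and |r - r_i| / v + t0.
import math

def l2_objective(source, t0, sensors, arrivals, v):
    """Every pick contributes quadratically, which is why LPEs can ruin
    the minimizer of this objective."""
    total = 0.0
    for sensor, t_obs in zip(sensors, arrivals):
        residual = t_obs - (t0 + math.dist(source, sensor) / v)
        total += residual ** 2
    return total

v = 5200.0                                    # P velocity, m/s
src, t0 = (100.0, 200.0, -50.0), 0.3          # synthetic source and origin time
sensors = [(0.0, 0.0, 0.0), (500.0, 0.0, 0.0), (0.0, 500.0, 0.0), (0.0, 0.0, 500.0)]
arrivals = [t0 + math.dist(src, s) / v for s in sensors]

print(l2_objective(src, t0, sensors, arrivals, v))   # 0.0 at the true source
```

Minimizing this function over `(source, t0)` is what the traditional locators above do; adding an LPE to any entry of `arrivals` shifts the minimizer away from the true source.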
Instead of minimizing the total residuals between the theoretical and observed arrival times, the VFOM searches the space for the position through which the greatest number of hyperboloids pass (Fig. 5a). The hyperboloid for the sensor pair $(i, j)$ is expressed by

$$\sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2} - \sqrt{(x - x_j)^2 + (y - y_j)^2 + (z - z_j)^2} = v\,(t_i - t_j).$$

Theoretically, all the hyperboloids of a multi-sensor array should intersect at the source if the arrival times are accurate and the velocity structure is correct. Unfortunately, different levels of error, such as small systematic errors in arrival times, exist even when high-quality picking programs are used. These inevitable errors cause the hyperboloids to deviate from the source; consequently, it is difficult to find a point that lies on all of the hyperboloids.
To prevent these small errors from misrepresenting the location, we introduce a spatially continuous function called the closeness basis (CB) to measure the closeness between a point in space and a hyperboloid. Instead of indicating whether a spatial point is on or off a hyperboloid, the CB describes the closeness between a point and the corresponding hyperboloid by its value at that point. Generally, the source has a relatively high closeness to all the hyperboloids; thus, the sum of the CBs at the source should be higher than at other positions. Therefore, by maximizing the sum of the CBs, we can locate the source. Briefly, the VFOM proceeds as follows:

Scientific Reports | 6:19205 | DOI: 10.1038/srep19205

Step 1: Establish a CB for each sensor pair; its value increases when approaching the corresponding hyperboloid.
Step 2: Superimpose all the CBs to obtain a spatially continuous function, which is called the total closeness field (TCF).
Step 3: Search the space for the position by maximizing the TCF value with a standard optimization algorithm.
Thus, we find the position that is close to as many hyperboloids as possible.
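The three steps above can be sketched in a 2-D toy setting. The Gaussian form of the CB, the parameter values, and the use of a coarse grid search in place of the Quasi-Newton iteration are all illustrative assumptions, not the paper's implementation.

```python
# Toy 2-D sketch of Steps 1-3 above (not the paper's code): a Gaussian
# closeness basis per sensor pair is an illustrative assumption, and a
# coarse grid search stands in for the Quasi-Newton iteration of Step 3.
import itertools, math

def tdoa_misfit(p, si, sj, ti, tj, v):
    """How far point p is from satisfying the hyperbola of pair (i, j)."""
    return (math.dist(p, si) - math.dist(p, sj)) - v * (ti - tj)

def tcf(p, sensors, arrivals, v, sigma=50.0):
    """Steps 1-2: one CB per sensor pair, superimposed into the TCF."""
    total = 0.0
    for i, j in itertools.combinations(range(len(sensors)), 2):
        m = tdoa_misfit(p, sensors[i], sensors[j], arrivals[i], arrivals[j], v)
        total += math.exp(-((m / sigma) ** 2))   # CB value in (0, 1]
    return total

v = 5200.0                                       # P velocity, m/s
src = (120.0, 340.0)                             # true source (synthetic)
sensors = [(0, 0), (1000, 0), (0, 1000), (1000, 1000), (500, -200)]
arrivals = [math.dist(src, s) / v for s in sensors]

# Step 3: maximize the TCF (grid search here for simplicity).
grid = [(x, y) for x in range(0, 1001, 20) for y in range(0, 1001, 20)]
best = max(grid, key=lambda p: tcf(p, sensors, arrivals, v))
print(best)   # the grid point at the true source
```

Adding an LPE to one arrival only flattens the CBs of the pairs involving that sensor; the remaining pairs still stack up at the source, which is the LPE-tolerance mechanism discussed above.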
Closeness basis.
The CB is a whole-space function that measures the closeness between a point and its corresponding hyperboloid. If we build a local coordinate system OXYZ with the $i$th and $j$th sensors placed symmetrically on the Z-axis, we can rewrite the hyperboloid in the standard form

$$\frac{Z^2}{a^2} - \frac{X^2 + Y^2}{b^2} = 1. \qquad (8)$$

Because of the errors in arrival times and velocity, the exact position and shape of the hyperboloid vary within a certain range (the shadowed area in Fig. 5b). Generally speaking, the closer a point is to Eq. (8), the greater the likelihood that the point is the source. Here, we establish a virtual field (i.e., the CB) to quantify this closeness and possibility. In particular, a feasible CB is given by

$$\mathrm{CB}_{ij}(X, Y, Z) = \exp\!\left(-\frac{\Delta Z^2}{\sigma^2}\right),$$

where $\Delta Z$ denotes the Z-direction distance between a point $(X, Y, Z)$ and the hyperboloid of Eq. (8), and $\sigma$ is a constant controlling the shape of the CB (the determination of $\sigma$ is described in Fig. 5b). The value of the CB ranges from 0 to 1; the larger the CB value at a position, the higher the possibility that the source appears there.
CBs assemble into the TCF. To superimpose the CBs in a global system, the relation between the local systems and the global system must first be established. In fact, the transformation from a local system OXYZ to the global system oxyz is the key step in building the TCF, and it can be achieved by several rotations and translations. For example, an available transformation consists of one translation (to the midpoint of the sensor pair) and two rotations,

$$(X, Y, Z)^{T} = R_{ij}\,\big((x, y, z)^{T} - (x_{ij}^{0}, y_{ij}^{0}, z_{ij}^{0})^{T}\big),$$

where $(x_{ij}^{0}, y_{ij}^{0}, z_{ij}^{0})$ is the midpoint of the sensor pair and $R_{ij}$ denotes a 3 × 3 matrix whose elements are determined by the positions of the sensor pair; $R_{ij}$ combines a y-axis-based rotation and an x-axis-based rotation. Figure 6 illustrates the TCFs of 2D arrays containing 4, 5, 6 and 8 triggered sensors. As shown in Fig. 6, the TCF becomes increasingly complex and the number of local optimum solutions (secondary intersections) increases with the number of sensors. However, a larger number of sensors also increases the number of hyperboloids intersecting at the source, which keeps the main peak (i.e., the source) standing out against the secondary peaks. This characteristic makes it convenient to distinguish the global solution from local solutions, e.g., by setting a threshold as the stopping criterion of the iteration process to judge whether a found solution is the global one.
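One way to realize such a pair-wise transformation is sketched below. The angle conventions and function names are illustrative assumptions (the paper's exact $R_{ij}$ is not reproduced here), chosen only so that the sensor-pair baseline lands on the local Z-axis.

```python
# Hedged sketch of the transformation: a translation to the pair's
# midpoint, then a y-axis rotation and an x-axis rotation so the
# sensor-pair baseline lands on the local Z-axis. The angle conventions
# below are illustrative assumptions, not the paper's exact R_ij.
import math

def matvec(M, v):
    return tuple(sum(M[r][c] * v[c] for c in range(3)) for r in range(3))

def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(3)) for c in range(3)] for r in range(3)]

def pair_rotation(si, sj):
    dx, dy, dz = (sj[0] - si[0], sj[1] - si[1], sj[2] - si[2])
    a = math.atan2(dx, dz)                        # y-rotation removes the x-component
    ca, sa = math.cos(a), math.sin(a)
    Ry = [[ca, 0.0, -sa], [0.0, 1.0, 0.0], [sa, 0.0, ca]]
    _, y1, z1 = matvec(Ry, (dx, dy, dz))
    b = math.atan2(y1, z1)                        # x-rotation removes the y-component
    cb, sb = math.cos(b), math.sin(b)
    Rx = [[1.0, 0.0, 0.0], [0.0, cb, -sb], [0.0, sb, cb]]
    return matmul(Rx, Ry)

def to_local(p, si, sj):
    """Global coordinates -> local OXYZ of the pair (origin at the midpoint)."""
    mid = tuple((a + b) / 2 for a, b in zip(si, sj))
    return matvec(pair_rotation(si, sj), tuple(a - m for a, m in zip(p, mid)))

si, sj = (100.0, -50.0, 0.0), (300.0, 250.0, 400.0)
X, Y, Z = to_local(sj, si, sj)
print(X, Y, Z)   # X and Y vanish; Z equals half the sensor separation
```

After this transform, the two sensors sit symmetrically on the local Z-axis, which is exactly the frame in which the CB of Eq. (8) is evaluated.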
Searching procedure. Many iterative techniques have been used to search for the solution in source location, including the grid search technique 19 , the differential evolution algorithm 30 and the genetic algorithm 31 . The most efficient optimization algorithms are unconstrained local algorithms that utilize the derivative information of the objective function, such as the Quasi-Newton method and nonlinear conjugate gradient methods; moreover, these algorithms are available in standard, readily obtained implementations. Because the TCF is continuous and differentiable, we are able to use these standard optimization algorithms as the iterative technique in the VFOM procedure. In general, local optimization algorithms produce solutions that depend on the initial values used. To increase the stability of the solution, we select the best solution after running the iterative process multiple times with different initial values; stopping criteria are then needed to break the iterations. Two stopping criteria are optionally adopted by the VFOM, i.e., stopping criterion A (SC-A) and stopping criterion B (SC-B). SC-A is a threshold-based criterion that stops the loop with a failure message, rather than an unsatisfying solution, when no objective value exceeding the threshold is found. The recommended threshold is a function of the number of triggered sensors $n$ and the tolerated number $k$ of LPE-contaminated picks, chosen such that data in which more than one-third of the arrival times contain LPEs are judged unreliable. The use of SC-A thus allows the VFOM to assess the picking quality: the VFOM may refuse to give a result that it judges extremely unreliable because the arrival times contain too many LPEs (for the recommended threshold, data with more than 33.3% of arrival times containing LPEs are considered "unreliable"). Unlike SC-A, SC-B picks the best solution (with the largest objective value) from the results of multiple trials as its final location.
The mechanism of SC-B ensures that the VFOM can always obtain the location.
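The multistart strategy with SC-B can be sketched as follows. The crude coordinate-ascent climber and the two-peak toy field are illustrative stand-ins for the Quasi-Newton search and the TCF; all names and parameter values are assumptions for illustration.

```python
# Sketch of the multistart strategy with SC-B (keep the best of several
# trials). The coordinate-ascent climber and the two-peak toy field are
# illustrative stand-ins for the Quasi-Newton search and the TCF.
import math, random

def toy_field(p):
    """Main peak (height 10) at (300, 700), secondary peak (height 6) at (800, 200)."""
    g = lambda cx, cy, h: h * math.exp(-((p[0] - cx) ** 2 + (p[1] - cy) ** 2) / 2e4)
    return g(300, 700, 10.0) + g(800, 200, 6.0)

def local_maximize(f, x0, step=50.0, tol=1e-3):
    """Crude derivative-free hill climber with step halving."""
    x = list(x0)
    while step > tol:
        improved = False
        for d in range(len(x)):
            for sgn in (1.0, -1.0):
                trial = list(x)
                trial[d] += sgn * step
                if f(trial) > f(x):
                    x, improved = trial, True
        if not improved:
            step *= 0.5
    return x

def locate_sc_b(f, bounds, trials=20, seed=0):
    """SC-B: run from several random starts and keep the largest objective value."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        x0 = [rng.uniform(lo, hi) for lo, hi in bounds]
        cand = local_maximize(f, x0)
        if best is None or f(cand) > f(best):
            best = cand
    return best

loc = locate_sc_b(toy_field, [(0.0, 1000.0), (0.0, 1000.0)])
print(round(loc[0]), round(loc[1]))   # converges to the main peak
```

Trials that converge to the secondary peak are simply outscored by any trial that reaches the main peak, which is why SC-B always returns a location; SC-A would additionally compare the best objective value against a threshold and report failure if it falls short.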
Here we briefly summarize the VFOM location process. It includes three main parts: initialization, assembly and the stopping criterion. The assembly process is further divided into the basis process and the coordinate-transform process, and SC-A and SC-B are alternative stopping criteria. Using the searching procedure described above, the VFOM locates sources quite efficiently; for example, the whole procedure, comprising hundreds of initialize-and-search trials, takes less than one second even on an ordinary personal computer.
Uncertainty estimation. The jackknife resampling method is applied to estimate the location uncertainty of the VFOM. The procedure relocates each event repeatedly, each time deleting one station from the data. The jackknife method has also been employed by other researchers to estimate the stability of location methods 7,18 . The standard deviation of the Euclidean distances among the locations obtained from the resampled station sets of an event is taken as the uncertainty of that event.
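A minimal sketch of this leave-one-station-out procedure follows. Reading the uncertainty as the standard deviation of each relocation's distance to the relocations' centroid, and the dummy locators themselves, are illustrative assumptions.

```python
# Sketch of the leave-one-station-out jackknife described above. The
# "uncertainty" here is read as the standard deviation of the Euclidean
# distances from each relocation to their centroid, and the dummy
# locators are illustrative assumptions.
import math, statistics

def jackknife_uncertainty(locate, sensors, arrivals):
    relocations = []
    for i in range(len(sensors)):
        sub_sensors = sensors[:i] + sensors[i + 1:]
        sub_arrivals = arrivals[:i] + arrivals[i + 1:]
        relocations.append(locate(sub_sensors, sub_arrivals))
    centroid = tuple(sum(c) / len(relocations) for c in zip(*relocations))
    distances = [math.dist(p, centroid) for p in relocations]
    return statistics.pstdev(distances)

sensors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 2.0)]
arrivals = [0.1, 0.2, 0.3, 0.4, 0.5]

stable = lambda s, t: (100.0, 200.0)        # insensitive to station deletion
fragile = lambda s, t: (s[0][0], s[0][1])   # jumps when the first station is deleted

print(jackknife_uncertainty(stable, sensors, arrivals))    # 0.0
print(jackknife_uncertainty(fragile, sensors, arrivals))   # > 0
```

A locator whose answer barely changes when any one station is removed (the VFOM case in Fig. 3) gets a small uncertainty, whereas one that jumps around under deletion (TL1 with LPEs and few sensors) gets a large one.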
Conclusions
We developed a novel method, called the VFOM, to locate single-source events from arrival times contaminated by LPEs; it has been verified by both synthetic tests and in-site explosions. The numerical simulations of the synthetic tests show that the VFOM is superior to traditional methods for known sources when the input data contain different probabilities of LPEs. Furthermore, in the location of explosion events, the VFOM demonstrates its accuracy and stability with both P and P-S arrivals. We then discussed the LPE-tolerance mechanism of the proposed method, namely the resistance of its objective function to the impact of LPEs. The velocity sensitivity analysis shows that the VFOM has a sensitivity to velocity errors similar to that of traditional methods. In addition, we discussed the properties of the two optional stopping criteria suggested in this paper: SC-A gives more accurate and stable results, whereas SC-B ensures that every event is successfully located. The VFOM is suitable not only for local microseismic location but also for other passive location problems in a homogeneous medium, such as acoustic source localization.
"year": 2016,
"sha1": "863d0b2092479c7a6e263501d8a213157226c71b",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep19205.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "863d0b2092479c7a6e263501d8a213157226c71b",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
Coupled Oscillators with Chemotaxis
A simple coupled oscillator system with chemotaxis is introduced to study morphogenesis of cellular slime molds. The model successfully explains the migration of the pseudoplasmodium, which has been experimentally predicted to be led by cells with higher intrinsic frequencies. The results predict that its velocity attains a maximum in the interface region between total locking and partial locking, and also suggest possible roles played by partial synchrony during multicellular development.
§1. Introduction
Inspired by the collective behavior and rhythmicity of biological systems, synchronization of coupled limit-cycle oscillators with a frequency distribution has been studied using a system with all-to-all interaction of the following form: 1,2,3)

$$\dot{W}_j = (1 + i\omega_j - |W_j|^2)\, W_j + \frac{K}{N} \sum_{k=1}^{N} (W_k - W_j), \qquad (1.1)$$

where $\omega_j$ is the intrinsic frequency of oscillator $j$, $W_j$ is a complex variable and $(\,\dot{}\,) = \frac{d}{dt}$. The system has served well for theoretical studies of frequency entrainment and critical behavior in systems far from equilibrium. Despite its original aim, however, very little has been discussed about the system's applicability and connection to biological systems.
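The frequency-entrainment behavior of such all-to-all models can be illustrated with the phase-reduced (Kuramoto-type) form. The Euler integration scheme and every parameter value below are illustrative assumptions, not the paper's simulation.

```python
# Hedged sketch of frequency entrainment in an all-to-all coupled
# population, using the phase-reduced (Kuramoto-type) form
#   dphi_j/dt = omega_j + (K/N) * sum_k sin(phi_k - phi_j).
# Euler integration and all parameter values are illustrative assumptions.
import math

def mean_frequencies(K, omegas, dt=0.02, steps=5000):
    N = len(omegas)
    phi = [0.1 * j for j in range(N)]   # arbitrary initial phases
    start = list(phi)
    for _ in range(steps):
        phi = [p + dt * (w + (K / N) * sum(math.sin(q - p) for q in phi))
               for p, w in zip(phi, omegas)]
    T = dt * steps
    return [(p - p0) / T for p, p0 in zip(phi, start)]

omegas = [0.9, 0.95, 1.0, 1.05, 1.1]
free = mean_frequencies(0.0, omegas)    # uncoupled: spread of 0.2 persists
locked = mean_frequencies(1.0, omegas)  # strong coupling: common frequency near 1.0
print(max(free) - min(free), max(locked) - min(locked))
```

Above the locking threshold the measured frequencies collapse onto the population mean, which is the frequency entrainment that the model in eq. (1.1) is built to study.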
Development of the cellular slime mold Dictyostelium discoideum provides an ideal example to investigate in this respect, since its chemoattractant secretion exhibits limit-cycle oscillations in the vicinity of a Hopf bifurcation point 4) with a difference in intrinsic frequencies and synchronization. 5) These eukaryotic cells feed on bacteria and grow by binary fission. Deprived of food, they initiate aggregation by emitting cAMP as a chemoattractant while simultaneously making a directed motion against its surrounding gradient. Each aggregation territory consists of 10^3 to 10^5 amoebae that together form a spherical mound whose nipple-like apex zone consists of differentiated prestalk cells. The tipped mound elongates vertically to form a slug-like pseudoplasmodium, which migrates to a suitable environment where it forms a fruiting body. 6,7) To study the self-organizing process of cells in such systems, one must understand the nature of collective behavior in a population of motile oscillators. In the following, we first introduce a system that incorporates chemotaxis into eq. (1.1) and present the system's overall behavior. We then discuss our results and their implications for the development of a cellular slime mold.

§2. Equations
The system is derived from a linear diffusion equation (2.1) for two chemical species, denoted by a complex variable $Z(\mathbf{r}, t)$, together with equations (2.2) for the chemotaxis of cell $j$ at $\mathbf{r} = \mathbf{r}_j$:

$$\frac{\partial Z}{\partial t} = D \nabla^2 Z, \qquad (2.1)$$

where $D$ is a diffusion constant and $\tilde{\alpha}$ (entering eq. (2.2)) is a chemotactic sensitivity coefficient. Cell $j$ ($j = 1, 2, \cdots, N$) is represented by a region $[0 \le |\mathbf{r} - \mathbf{r}_j| \le r_0]$ on which a boundary condition representing the metabolism $F$ of intracellular chemical species, expressed in a complex variable $W_j(t)$, is imposed. The outer boundary condition is

$$\lim_{r \to \infty} Z(\mathbf{r}, t) = 0, \qquad (2.3)$$

and conditions (2.4) and (2.5) are imposed on each cell boundary. Here $\psi(\{W_k\})$ abbreviates the interaction term $\psi(\{W_1, W_2, \cdots, W_N\})$, and $C$ is a constant background level of $Z(\mathbf{r}, t)$. Solving eq. (2.1) under the $N$ boundary conditions (2.4) yields the field $Z(\mathbf{r}, t)$ as an integral equation (2.6), in which the diffusion kernel $\Phi$ is either a Gaussian or a Bessel-type function according to the dimension of $\mathbf{r}$.
To simplify the system, let us assume $\Phi(|\mathbf{r}(t) - \mathbf{r}_k(\tau)|, t - \tau) \to \delta(t - \tau)$ in the limit $|\mathbf{r} - \mathbf{r}_k| \to 0$. Assuming an $F$ that yields the Hopf bifurcation normal form under a mean-field coupling $\psi(\{W_k\}) \propto (Z(\mathbf{r}_j, t) - Z(\mathbf{r}_j + \mathbf{r}_0, t))$, eqs. (2.2) and (2.5) can be rewritten as eqs. (2.7) and (2.8), where $\omega_j$ and $\lambda$ independently determine the intrinsic frequency and the amplitude of the oscillation. In a weakly coupled regime, we can neglect the amplitude effect, which allows eqs. (2.7) and (2.8) to be further reduced to a coupled phase model, eq. (2.9), where $c$ is the real part of $C$. In the transformation from eq. (2.7) to eq. (2.9), we have fixed $\lambda = 1$. In addition, a deceleration term and a periodic change of chemotaxis sensitivity were added to our previous model. 9) The sensitivity function $\gamma$ and the short-range interaction $v_d$ are defined by eq. (2.10), where $\beta$ and $\kappa$ are both positive constants. By imposing on each cell a boundary condition in the form of an ordinary differential equation, the spatially extended system of eqs. (2.1) and (2.2) is reduced to a set of integro-differential equations that track the time development of the cell boundaries. Notice that eq. (2.9) provides a general scheme that incorporates spatial dependency into the well-studied phase model. 10,11) This is a novel approach to reaction-diffusion systems, applicable to those that exhibit nonlinearity localized on boundaries.

§3. Method

For numerical analysis, the kernel is simplified to the point where it is indistinguishable from a one-dimensional kernel except for a factor of $r_0/r$ and a shift in the origin of the exponent. Furthermore, it is multiplied by a step function that roughly incorporates the degradation of the chemoattractant by the enzyme in the extracellular substratum; $\Delta t$ can therefore be regarded as the average life span of the chemoattractant.
The precise form used in the calculations includes the factor $\Theta(\Delta t - (t - \tau))$, where $\Theta(t)$ denotes the Heaviside step function. Numerical studies of eqs. (2.9) and (2.10) were performed using the fourth-order Runge-Kutta method together with the semi-open formula 8) for the integral terms. The parameters $D$, $r_0$ and $\kappa$ were fixed at unity throughout the calculations below. Other parameters are $c = 2.0$, $\omega_j = 1.0 + (j - 1)\Delta\omega$, $\beta = 0.01$ and $\Delta t = 2.0$ unless stated otherwise.
We employed constant initial values on the interval $[-\Delta t, 0]$, and all results were obtained with a fixed step size of $h = 0.01$. Some calculations with $N = 2$ were checked for accuracy using $h = 0.001$ and $h = 0.0001$.

§4. Migration in N = 2

We first describe the simplest case, $N = 2$, to characterize the attractors of the system. To carry out a steady-state approximation, chemotaxis is confined to one dimension and adaptation is not considered ($\kappa = 0$). Suppose the oscillators are entrained with a constant phase difference $\psi = \phi_1 - \phi_2$. When $x_2 - x_1 = 2 r_0 = \mathrm{const.}$, the equation describing the position of the centroid $x_c = (x_1 + x_2)/2$ can be approximated by eq. (4.1), where we have neglected second- and higher-order terms in $\psi$ and changed variables from $\tau$ to $\tau' = t - \tau$. The mean cluster velocity $v_c = \langle \dot{x}_c \rangle$ is therefore proportional to $\psi$; from the locking condition $\dot{\psi} = 0$, $v_c$ is also proportional to $\Delta\omega$. Applying the same approximation as in eq. (4.1) yields the entrained frequency $\omega^*$ for the phase-locked state. The parameter dependencies given here were confirmed by numerical simulations for $N = 2$ with $\kappa = 1.0$. For sufficiently large $\alpha$, the oscillator with the larger $\omega_j$ is advanced in phase and leads the migration, as seen in Figs. 1 and 2. Not only $\alpha$ but also $\Delta\omega$ increases the cluster velocity $v_c$; note that there is no net migration of a cluster when $\Delta\omega = 0$. Figure 2 also indicates that, owing to the deceleration term absent from the Aizawa-Kohyama (A-K) model discussed previously, 9) there is a velocity proportional to $\alpha$ (but not to $1/\epsilon$) even when the frequencies are not entrained ($\epsilon = 0.1$).

§5. Numerical Results for N = 20

To characterize the coherence of the phase dynamics, the order parameter $R = \frac{1}{N} \sum_{j=1}^{N} e^{i\phi_j}$ is plotted against $\epsilon$ in Fig. 3. Since we are dealing with a very small $N$, $|R|$ does not approach zero as $\epsilon \to 0$. Note that in Fig.
3, the minimum of $|R|$ was plotted rather than, e.g., $\langle |R| \rangle$ ($\langle\,\rangle$ denotes time averaging). A cluster shows directed migration when the oscillators are entrained to a common frequency. The instantaneous velocity of the centroid, defined by $\mathbf{r}_c = \frac{1}{N} \sum_{j=1}^{N} \mathbf{r}_j$, is plotted against $\epsilon$ in Fig. 4. The apparent discontinuity at the onset of total entrainment suggests that the cluster velocity, too, may be taken as an order parameter.
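The order parameter itself is straightforward to compute; this small sketch (with illustrative phase values) shows the two limiting cases.

```python
# Sketch of the phase-coherence order parameter R = (1/N) * sum_j exp(i*phi_j)
# used above: |R| approaches 1 for synchronized phases and (for large N)
# 0 for evenly spread phases. The phase values are illustrative.
import cmath, math

def order_parameter(phases):
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

synced = [0.3] * 20                                  # identical phases
spread = [2 * math.pi * j / 20 for j in range(20)]   # evenly spread phases
print(order_parameter(synced), order_parameter(spread))
```

For a finite population such as $N = 20$, random (rather than evenly spread) phases give $|R| \sim 1/\sqrt{N}$, which is why $|R|$ does not approach zero as $\epsilon \to 0$ in Fig. 3.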
The relation between the polarity of a cluster and the migration direction can be understood by introducing $Z(\theta)$, defined in terms of the ranks $m_k \in \{1, 2, 3, \cdots, N\}$ that order the oscillators by their distance from a given point $(x, y) = (r\cos\theta, r\sin\theta)$ outside the cluster; 1 is assigned to the closest and 20 to the furthest oscillator. By this definition, $Z(\theta)$ takes its maximum value in the direction $\theta$ (measured from the centroid) in which oscillators with smaller $\omega_j$ are positioned. When $\epsilon$ and $\Delta t$ are small but large enough to synchronize the oscillators, a cluster heads in the direction where more oscillators with smaller $\omega_j$ are located. The orientation reverses as both $\epsilon$ and $\Delta t$ are increased so as to make the coupling long range.

§6. Discussion

A simple coupled oscillator model of cell aggregates was derived from a linear diffusion equation with time-dependent boundaries. The approximation for $N = 2$ and the numerical analysis for $N = 20$ revealed that synchrony in such a population of oscillators with chemotaxis results in migration of the population as a whole. In addition to a migration direction that agrees with an experimental prediction, the present work has some implications concerning the possible order parameter.
We showed in Fig. 5 that, for $N = 20$, oscillators with larger $\omega_j$ also lead the cluster translocation if $\Delta t$ is sufficiently large. In light of the prediction from suspension experiments 5) that cells with higher frequency constitute the anterior of a slug, it may be concluded that cell-to-cell interaction is not local but rather long range. The opposite migratory direction predicted for small $\Delta t$ suggests that a reverse flow, such as the one exhibited by a subpopulation of cells at the onset of culmination, 12) could be realized without any secondary chemoattractant. It would be interesting to see whether cells make use of such effects caused by different coupling ranges, which could be controlled either by the extracellular enzyme concentration or by the surface receptor occupancy.
The relation between the coupling strength $\epsilon$ and the instantaneous cluster velocity partly confirms the result obtained with the A-K model, 9) with its peak centered around the critical region. Below $\epsilon = 7.5$, $v_c$ continues to fluctuate with no fixed direction, which prevents us from obtaining its mean.
We have observed that in a partially synchronized state, locked oscillators emerge specifically in the anterior section of a cluster (data not shown), which implies that desynchronization may play a role in cell-type differentiation. In general, $f^{-\mu}$ fluctuations are to be observed in such an interface region; a detailed analysis of these fluctuations and their effects will be provided elsewhere.
Because the coupling and chemotaxis take the form of integral equations, we have confined the present report to results obtained for $N = 20$ with fixed $r_0$ and $D$. The diffusion characteristics make it difficult to vary the ratio $r_0^2/D$, since decreasing it requires a smaller step size $h$ and increasing it requires a larger $\Delta t$. Future work will also address the construction of a simpler scheme that makes the analysis more feasible.
"year": 1998,
"sha1": "4b16a85173a8667a250ae21a356692b4e361ae84",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4b16a85173a8667a250ae21a356692b4e361ae84",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Physics",
"Biology",
"Chemistry"
]
} |
Diversification of Fresh Asparagus Exports from Perú
This quantitative descriptive study examined the diversification of asparagus exports in Peru using the Herfindahl-Hirschman Index (HHI). The results show a clear dominance of the United States as the main export destination, although there was an annual decrease of 6.7% in 2022. The HHI revealed a growing trend towards greater market concentration in Peruvian asparagus exports. At the company level, despite a general decrease in the number of exporting companies, significant growth was observed in some of them, such as Agrícola Cerro Prieto and Agrovision. Based on these results, it is recommended that Peruvian companies seek further diversification of their export markets to minimize dependence on a single market and develop adaptable and resilient business strategies to cope with a constantly evolving market.
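The HHI referred to above is simply the sum of squared market shares; the sketch below uses illustrative destination shares, not Peru's actual export figures.

```python
# Sketch of the Herfindahl-Hirschman Index used in the study: the sum of
# squared market shares, here on the percent scale (0 to 10,000). The
# destination shares below are illustrative, not Peru's actual figures.
def hhi(shares_percent):
    assert abs(sum(shares_percent) - 100.0) < 1e-6, "shares must sum to 100%"
    return sum(s ** 2 for s in shares_percent)

concentrated = hhi([70.0, 15.0, 10.0, 5.0])    # one dominant destination
diversified = hhi([25.0, 25.0, 25.0, 25.0])    # evenly split destinations
print(concentrated, diversified)               # 5250.0 2500.0
```

A rising HHI over the years, as reported for Peruvian asparagus, means export value is concentrating in fewer destination markets; a perfectly even split across k markets gives an HHI of 10,000/k.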
Introduction
International trade, a fundamental pillar of the global economy, is based on the theory of comparative advantage, which argues that countries benefit from specializing in the production and export of goods and services in which they have a relative advantage (Casanova & Zuaznábar, 2018). This principle, first formulated by David Ricardo in the 19th century, remains a cornerstone of contemporary international economics. Exports, as a major component of international trade, play a crucial role in the economic growth and development of countries (Walter, 2022). They enable countries to leverage their comparative advantages, generate employment, increase productivity, improve the trade balance, and even establish trade agreements (Ando et al., 2022; Kwark & Lim, 2020; Yoshimatsu, 2020).
In light of countries' concern to maintain their export hegemony, the alternative of diversifying their exports emerges, which entails expanding the range of exported goods and services, as well as diversifying export markets and even diversifying exporting companies (Canh & Thanh, 2022).
According to the product life cycle theory, diversifying exports can be an effective strategy for countries, as it allows them to advance their economic development and reduce their dependence on a limited number of products or markets (Swathi & Sridharan, 2022). Consequently, diversifying exports has multiple benefits, including reducing vulnerability to price fluctuations and market volatility, improving competitiveness, and promoting innovation and technological development (Gnangnon, 2022). However, it also presents challenges, such as the need to invest in new production capabilities, adapt to the norms and regulations of new markets, and manage the risks associated with entering unknown markets (Nguyen et al., 2022).
The diversification of exports and export markets has proven to be a successful approach for several countries, especially in the context of fruit exports (LaFevor, 2022). This diversification process can be an effective strategy for developing countries to advance their economic development and reduce their dependence on a limited number of products or markets (Vázquez, 2016).
For example, Mexico has diversified its fruit exports to Asia and Europe, in addition to its traditional market in the United States (Agosin & Chancí, 2015). This process has involved a series of strategies (Cardoso, 2018). Firstly, the country has leveraged its comparative advantages, such as its favorable climate for fruit and asparagus production and its proximity to the markets of the United States and Canada. Secondly, it invested in improving its production and export infrastructure, such as modernizing processing and packaging facilities and enhancing logistics and transportation systems (Blanco et al., 2020). Thirdly, it established trade agreements with several countries in Asia and Europe, facilitating access to these markets (Maya et al., 2011).
On the other hand, Spain has expanded its asparagus exports to Middle Eastern and North African countries (International Trade Center, 2023). This expansion has been driven by several factors: the growing demand for asparagus in these markets, the adaptation of asparagus varieties to Spain's climatic and agricultural conditions, and the improvement of the country's production and export capabilities (Pérez et al., 2022). Through these strategies, Spain has managed to diversify its export markets, reducing its dependence on European markets and increasing its economic resilience (Rosal, 2019).
The experience of other countries likewise shows how diversifying exports can open new market opportunities and improve companies' economic performance (Alkhathlan et al., 2020). However, the complexity and challenges associated with this process are also evident (Quiñonez et al., 2021). Diversifying exports requires a deep understanding of international markets, consumers, and non-tariff entry barriers. It also requires significant investment in new production capabilities and in the improvement of export infrastructure (Li et al., 2022).
These experiences help us understand that countries can advance their economies through export diversification. However, it is also suggested that this process is influenced by a series of contextual factors, including countries' comparative advantages, international market conditions, and the policies and strategies of governments and companies (Markakkaran & Sridharan, 2022). Additionally, the importance of Michael Porter's theory of competitive advantage is highlighted, as he states that companies and countries can enhance their competitiveness through innovation and by improving their production and export capabilities (Vivoda, 2022).
The intricate architecture of international business dictates that corporations must navigate multiple economic, political, and cultural environments, which implies a constant confrontation with diversity and uncertainty. Company diversification, that is, the strategic expansion of a product or service portfolio, is a crucial component for ensuring sustainability and business resilience in this scenario of globalization (Uzundumlu et al., 2022). This tactic is used to mitigate the risks associated with excessive dependence on a single product line or specific market niche while leveraging opportunities for innovation and added-value creation. Diversification involves the conscious and systematic exploration of new business areas, vertical or horizontal integration, and the capture of inter-company synergies (Swathi & Sridharan, 2022). All of this can be enhanced by aligning with the existing capabilities of the company, identifying market gaps, and orienting towards future industry trends.
On the other hand, market diversification, the act of introducing and promoting products or services in multiple geographic markets, is a strategic imperative that can drive a company's growth and global competitiveness (Zhou & Tong, 2022). This process involves careful assessment of market suitability, cultural adaptability, entry and exit barriers, competitive dynamics, regulatory trends, and partnership or acquisition opportunities. Similarly, it requires a well-articulated differentiation and positioning strategy that fits the specific characteristics of each market, resulting in product adaptation, pricing, placement, and promotion according to the local market environment (Siddiqui & Afzal, 2022). Finally, it is not only a measure to dissipate risks but also a way to access new consumers, leverage economies of scale, explore learning opportunities, and maximize competitive advantage in the international arena (Cervera & Compés, 2018).
Asparagus exports have experienced significant growth worldwide in recent years. According to the Food and Agriculture Organization of the United Nations (2023), in 2022 the global trade of fresh asparagus exceeded 2.5 billion dollars. Countries such as Peru, Mexico, China, and Spain stand out as the main exporters, while the United States, Canada, and Germany are among the major importers.
In today's highly competitive and globalized economy, the diversification of Peru's flagship product exports becomes a crucial strategy to increase the competitiveness of the Peruvian agricultural sector (Montes, Pantaleón, et al., 2023). This diversification, understood as the process of expanding the sales of flagship products to a wider range of international markets (Montes, Pantaleon, et al., 2023), involves a meticulous evaluation of global demand, consumer preferences, and health regulations, as well as tariff and non-tariff barriers that could affect the introduction of many Peruvian products into new markets (Yllescas-Rodríguez et al., 2021). In this context, it is vital to consider bilateral and multilateral trade agreements, such as free trade agreements, which can facilitate market access and reduce transaction costs (Barrientos-Felipa & Motta, 2020). Through export diversification, the goal is not only to mitigate the risks associated with dependence on a limited number of markets but also to seize opportunities to increase the profitability and added value of Peruvian products, thus strengthening Peru's position as a relevant player in international trade (Escalante et al., 2022).
Based on the aforementioned, this study aims to describe the diversification of Peru's asparagus exports through the diversification of its markets and companies, as this is of vital importance to support sustainable growth in this agricultural industry. In this context, a predictive model for asparagus exports is also proposed, a crucial instrument to anticipate market trends, assess probabilities of success, and minimize risks associated with diversification.
Background
There is a study of the Mexican case of diversification of mango exports, which depend highly on the American market. The authors of this investigation propose Japan as a new destination for the fruit, as well as the addition of value to the fruit, in order to offer new product presentations beyond the raw mango (Maya et al., 2011).
Another investigation was carried out to identify the relationship between export diversification and export performance, based on the case of Spain as an exporting country and analyzing its products broadly. The results of this paper conclude that there is a direct relationship between both variables (Pérez et al., 2022).
Saudi Arabia is also considered in a study which determined that its exports are mainly concentrated in oil, a product in which the country's production is also concentrated. The authors additionally carried out a causality analysis involving the export concentration of this country (Rosal, 2019).
Ecuadorian mango also became a matter of investigation. Research on the geographical diversification of the exports of this product was carried out for the period 2016-2020. The results show that most mango shipments are destined for the United States. It is concluded that the public sector is related to this high degree of dependency on the American market (Quiñonez et al., 2021).
A study involving 101 countries analyzed product diversification and its effect on economic growth, based on gross domestic product per capita. Developed countries tend to diversify their products more, and it is recommended that they focus on their most productive ones, whilst developing countries make considerable efforts to diversify their products in order to improve their situation (Markakkaran & Sridharan, 2022).
Liquefied natural gas is another object of analysis, in a study of the diversification of Australia, Qatar, the United States, Russia and Malaysia, the main exporters of this product. The results demonstrate that all these countries have made efforts to diversify their markets. The researchers also proposed different dimensions of diversification, of which the geographical realm and price are the most relevant to analyze (Vivoda, 2022).
The asparagus industry in Peru stands as a foundational pillar within the country's agro-export sector, playing a pivotal role in the national economic dynamics through its significant contribution to foreign exchange earnings and job creation (Pairazaman, 2023). Historically, the cultivation of asparagus in Peru traces back several decades, but it was not until the implementation of economic liberalization policies and export promotion strategies in the 1990s that the industry witnessed exponential growth, elevating Peru to one of the world's foremost asparagus exporters (Quispe, 2023). This growth was underpinned by the adoption of innovative agricultural practices, investment in irrigation technology, and leveraging the favorable climatic conditions the country offers, enabling year-round harvesting (Mori & Castillo, 2023).
Key players in the industry range from small producers to large agro-exporters, including associations, governmental entities, and sector support organizations, collectively forging a robust and internationally competitive value chain (Avalos, 2023). However, the industry faces significant challenges in its endeavor to diversify export markets. These challenges encompass adapting to international quality and sustainability standards, the volatility of international prices, trade and tariff barriers, and the need for greater innovation and added value in the products offered (Lévano, 2023). Moreover, climate change emerges as a latent challenge, threatening the sustainability of the water resource essential for crop irrigation, thereby jeopardizing the sector's productive capacity.
Overcoming these obstacles necessitates integrated strategies that involve the continuous improvement of production processes, market diversification through the exploration of specific niches, and the promotion of sustainable agricultural practices (Chavez, 2023). Furthermore, the strengthening of strategic alliances between the public and private sectors is essential to enhance access to technologies, financing, and international markets. In this context, the Peruvian asparagus industry faces the imperative need to adapt and continually evolve to maintain its competitiveness on the global stage, thereby ensuring its long-term contribution to Peru's economic and social development. In the realm of exports, Peru has demonstrated remarkable progress characterized by sustainability and innovation (Montes et al., 2023).
Theoretical Bases
International trade is an important basis of the economy, and commerce comprises both goods and services (Jerzy & Oleksandr, 2022). International trade is underpinned by the theory of comparative advantage, which indicates that nations will trade the goods they produce most efficiently (Halkos et al., 2021). Efficiency is not the only condition for productive international business, because the continuity of economic development must also be guaranteed. Following this idea, it is necessary not to focus on a single export market; diversification must be adopted, not only of markets but also of products. The agricultural sector thus faces the additional challenge of adopting added-value chains for its products (Canh & Thanh, 2022; Constantin et al., 2023).
The theory of export-driven economic development through diversification rests on the premise that product diversification by enterprises reduces a company's degree of exposure to economic fluctuations. In addition, product diversification endows a business with technology and longer-term opportunities with its customers (Abdullahi et al., 2021; Damijan et al., 2020; Lemessa et al., 2018).
Although diversification has benefits for the economy and for business, some countries are limited in resources. In those cases, diversification is constrained and primary produce remains the main option, alongside other sectors such as tourism. These countries also show a high degree of vulnerability to external factors (Amare et al., 2019; Ben et al., 2022; Brummitt et al., 2020; Hodey et al., 2015; McIntyre et al., 2018; Parteka & Tamberi, 2013; Tabash et al., 2022).
On export diversification, there are two currents: the market destination diversification theory and the exporting entity diversification theory. The first states that the more markets a country does business with, the better it deals with the risks associated with having only one destination and a high degree of dependence on it (Fassio, 2018; Jaffee, 2023; Maertens & Swinnen, 2009; Quiñónez et al., 2021). Market diversification implies the adoption of business strategies, such as research and development [8,32].
On the other hand, export entity diversification rests on the premise that a large number of exporting enterprises is more convenient than a small number when facing the risks linked to enterprise dependence in a country's economy (Porter, 2008). Conditions that promote company diversification include policies for creating productive businesses, technology investments, the creation of the human capabilities needed to run and operate these companies, efficiency in production operations, and the implementation of adequate infrastructure for these ends. Another benefit of relying on a plethora of enterprises is that the country better resists external threats, such as fluctuations in product prices (Barney, 1991; Karahan, 2017).
Materials and Methods
A quantitative, descriptive, and non-experimental research methodology was employed to examine the phenomenon of asparagus export diversification in Peru. This approach involved the systematic collection of quantitative data, used to describe observable characteristics, patterns, and trends in a specific context, without manipulating the variables of interest or creating controlled experimental conditions. Through this non-experimental study, the authenticity and reliability of observations were ensured, providing an objective and rigorous snapshot of the state of asparagus trade without intervening in or altering the natural conditions of this market, just as in other investigations related to agricultural products from Peru (Montes et al., 2023). To determine the degree of asparagus export diversification, the Herfindahl-Hirschman Index (HHI) was used. This is a widely accepted measure in economics and international trade for quantifying the concentration, and therefore the diversification, of markets, companies and goods. The index was calculated by summing the squares of the market shares of individual companies or countries in a specific market. The interpretation of the HHI is the following: if HHI < 1500, the market is considered unconcentrated; if 1500 ≤ HHI ≤ 2500, it indicates moderate concentration; and if HHI ≥ 2500, the concentration is considered high (Department of Justice - The United States, 2018).
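As a concrete illustration, the HHI computation described above can be sketched in a few lines. The export values below are illustrative and only loosely echo the 2022 destination shares reported in the Results; they are not drawn from the customs records used in this study.

```python
def hhi(values):
    """Herfindahl-Hirschman Index: sum of squared percentage shares.

    `values` are export values (e.g., FOB USD) per market or company;
    shares are expressed in percent, so the HHI ranges from near 0
    (many tiny players) to 10000 (a single player).
    """
    total = sum(values)
    shares = [100 * v / total for v in values]
    return sum(s ** 2 for s in shares)

# Illustrative export values by destination market, in million USD
# (hypothetical, roughly patterned on the 2022 shares in the Results):
exports = {"United States": 244.0, "Netherlands": 35.4,
           "United Kingdom": 32.2, "Spain": 30.5, "Others": 28.1}

index = hhi(exports.values())

# Concentration bands used in this study (DOJ interpretation):
if index >= 2500:
    level = "high concentration"
elif index >= 1500:
    level = "moderate concentration"
else:
    level = "unconcentrated"

print(round(index), level)
```

With one dominant destination holding roughly two thirds of the total, the index lands well above the 2500 threshold, consistent with the high market concentration reported for Peruvian asparagus exports.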
The national subheading analyzed was 0709200000, corresponding to fresh or refrigerated asparagus. The analysis was based on records from Peru's customs declarations web portal, which provided a comprehensive and up-to-date database of export operations. Thus, the study population consisted of all data recorded by exporting companies during the period from 2018 to 2022, ensuring exhaustive coverage of relevant commercial transactions. Likewise, this allowed a detailed analysis of the trends, volumes, destinations, and companies involved in asparagus exports.
Results
Data on world imports of asparagus can be seen in Table 1. Focusing on the shares of importing countries, Germany led international purchases in 2018. From 2019 to 2022, the American market was at the top of the list, with a 63.8% share in 2022. The following destination countries have smaller but still relevant portions: Canada (5.3%), Germany (4.8%), Spain (3.2%), France (2.9%), the United Kingdom (2.8%) and the Netherlands (2.7%). Regarding the average growth of imports over the study period, the only country with a positive average rate was the United States, at 1.5%. Otherwise, the market with the smallest negative average rate was Belgium, at -0.1%, and the destination with the biggest average decrease was Japan, at 9.8%.
The maximum annual growth rates are also important in this analysis. The Netherlands, Belgium and Spain had the largest annual growths, at 31.3%, 26.5% and 20% respectively, all recorded in 2021. Also relevant are the minimum annual growth rates: Germany experienced the biggest annual decrease in 2022 (28.5%), whilst Belgium suffered a slowdown of 27.3% in the same year, in which France reduced its imports by 23.8%. The smallest decrease came from the American market in 2022 (12.8%). In Table 2, Peru's asparagus exports by market can be analyzed. In 2022, the United States dominated as the main destination, contributing 65.9% of the total, despite an annual decrease of 6.7% compared to the 261.4 million USD of 2021. The high point of the American market's imports of this product came in 2020, with 262.1 million USD, and its lowest year was 2018, with imports of 227.1 million USD. It is also remarkable that the highest growth of asparagus imports in the American market occurred in 2019, at 11.1%, in contrast to the 6.7% decrease this country showed in 2022 relative to 2021. During the study period, the average annual growth rate of exports to the United States was 2%.
The Netherlands, the United Kingdom and Spain, contributing 35.4, 32.2 and 30.5 million USD respectively, represented 9.5%, 8.7% and 8.2% of total exports in 2022, with a decreasing trend compared to the previous year. The average growth rates of these countries are -2.2% (the Dutch market), -6% (the British destination) and 2% (Spain), this last country being the only one among the three with growth in its international purchases of Peruvian asparagus. Comparing the Netherlands, the United Kingdom and Spain, the market with the highest annual import value in the last five years was the British one, with 42.1 million USD in 2018, and the country with the lowest value imported in a single year was Spain, with 25.5 million USD in 2020.
Other information can be mentioned: among average growth rates, Canada has the highest at 31% and Brazil is at the bottom with a 10.9% average decrease. It is also important to note France as the market with the lowest positive ratio (0.2%) and Mexico as the one with the least pronounced negative growth (-0.6%). Just as the Canadian market is top in average growth, it is also at the peak when analyzing the maximum growth rates of all countries (160.4% in 2019). Brazil, being the market at the bottom in average growth, is also the country with the lowest maximum growth rate (8.3% in 2021). The Brazilian destination is likewise the country with the lowest minimum shift rate (-51.4% in 2020). Belgium has the highest minimum variation percentage in a year (-3.7% in 2022).
Regarding total asparagus exports, there was a decrease of 7.3% in 2022 compared to 2021, with average annual growth of -0.3% during the 2018-2022 period. The year with the biggest export value was 2019 (400.2 million USD), the year in which Peruvian asparagus had its highest growth rate (6%), while the lowest value is found in 2022 (370.2 million USD), which was also the year with the lowest rate of export variation (-7.3%). The examination of the data in Table 3 reveals an increase in the HHI from 3929 in 2018 to 4584 in 2022, which indicates a growing trend towards greater market concentration in Peruvian asparagus exports. Peak market concentration occurred in 2020 with an HHI of 4920 (the year with the highest growth rate of the index, 16.4%), followed by a decrease of 7.8% to 4534 HHI points in 2021, before another increase to 4584 in 2022. The 2022 rise represented the smallest positive variation of this index (1.1%). The average growth of the index was 4.3% over the years of study. Market concentration is high, and there is a trend towards greater concentration in a few export destinations for the product under investigation. The data on exports by enterprise are shown in Table 4. In 2022, Danper Trujillo led sales abroad, representing approximately 10.8% of total exports and experiencing annual growth of 34% since 2021. Complejo Agroindustrial Beta, although it decreased slightly to 33.4 million USD in 2022 compared to the 33.9 million generated in 2021, maintained a 9% share of that year's total exports.
Companies such as Agricola Cerro Prieto and Agrovision Peru showed positive average growth during the study period, with average growth rates of 28.4% and 194.7% respectively. On the other hand, Sociedad Agricola Drokasa, Kimsa Fresh, Floridablanca, Agro Paracas and Santa Sofia del Sur experienced decreases in their exports in 2022 compared to the previous year, Agro Paracas being the enterprise with the biggest decrease (24.7%) and Floridablanca the one with the mildest slowdown (2.9%).
The conglomerate of other companies saw its exports fall from 218.3 million USD in 2021 to 179.9 million in 2022, a decrease of 17.6%. In total, exports decreased from 399.4 million USD in 2021 to 370.2 million USD in 2022, a decrease of 7.3%.
Danper Trujillo is a company whose exports peaked in 2018 (45.3 million USD). Its average growth rate is -1.3%, which is unfavorable but not as much as the cases of Santa Sofía del Sur (-4.4%), Sociedad Agricola Drokasa (-5.2%) and Complejo Agroindustrial Beta, which suffered the lowest growth in the period of study (-8.4%). Despite this problem, the latter enterprise was the export leader from 2018 to 2021.
There are also companies that were in a very healthy situation in the period of analysis. Judging by their average growth percentages, they are Agrovision Peru (194.7%), Kimsa Fresh (161.8%), TWF S.A. (79.7%) and Agricola Cerro Prieto (28.4%). These enterprises exported their biggest values in 2022, except for Kimsa Fresh, which did so in 2021. Regarding this last company, it can be noted that it had no operations in 2018 and 2019, beginning its shipments in 2020. The year 2019 brought the biggest annual growth for Agricola Cerro Prieto, Agrovision Peru and TWF S.A., at 74.9%, 613.2% and 206.1% respectively.
Focusing on decreases, Kimsa Fresh had a very problematic year in 2020, with an annual rate of -30.6%, the lowest among all companies mentioned in Table 4. Besides this company, Agro Paracas (-24.7%) and Floridablanca (-23.9%) experienced considerable decreases in their exports in 2022 and 2020, respectively. The mildest decrease was that of Kimsa Fresh in 2022 (3.8%), while the highest positive minimum growth is that of Agricola Cerro Prieto, with a rate of 7.9%. It should be added that this company did not experience any decrease in its exports during the period of study.
The Herfindahl-Hirschman Index for the concentration of asparagus exporting companies can be seen in Table 5. Overall, there is a decreasing trend in market concentration, from an HHI of 503 in 2018 to a minimum of 348 in 2021. However, the index experienced an increase of 17.7% to 410 in 2022, which means a higher concentration of exports in a few companies. The average growth of this index is -4.2%. The general decrease in the HHI suggests an increase in competition in Peru's asparagus export market during the period from 2018 to 2021. According to the information provided in Table 6, the biggest asparagus-exporting region of Peru in 2022 was La Libertad, with 150 million USD and a 40.7% share, although it must be highlighted that Ica was at the top in the previous years (from 2018 to 2021), with an average share of 42.4% during those years. La Libertad experienced average growth of 4.7% between 2018 and 2022, with a maximum annual growth of 18.4% in 2019 and a minimum annual growth of -0.9% in 2021. On the other side, Ica's exports behaved differently, with average growth of -8.7% in the period of analysis, 1.4% being its maximum annual growth, in 2021, and -18.2% its minimum annual shift rate, in 2022.
There is also a cluster of regions worth mentioning: Lambayeque, Lima and Ancash, which together gathered an average share of 19.8% over the period of this study. Callao could also be included in this cluster, but only between 2018 and 2020. Within the group formed by the three regions, Lambayeque stands out with average growth of 38%, its annual maximum being 132.6% in 2019 and its annual minimum -6.6% in 2022.
Focusing on other regions, Piura underwent a very notable change, with an average growth rate of 696.2%, its maximum annual shift rate occurring in 2021 (2048.4%). Meanwhile, the regions of Huancavelica, Amazonas and Cajamarca were severely affected in their exports between 2018 and 2022, with average growth rates of -100%, -100% and -93.7% respectively. Analyzing the Herfindahl-Hirschman indexes centered on the exporting regions of Peru (see Table 7), there is a clear concentration of exports in a few regions, or one region, although there is also a slight decrease in this concentration. The average shift rate during the period of analysis was -3.5%, 2020 being the year with the lowest annual variation (-5.6%) and 2022 the year with the highest (-1.4%). The data in Table 8 show an increase in the geographic diversification of asparagus export markets, with the number of recipient countries growing from 39 in 2018 to 42 in 2022, implying cumulative growth of 7.5% and average growth of 1.88% over the five-year period. On the other hand, a downward trend in diversification at the level of exporting companies was observed, with a decline from 92 companies in 2018 to 77 in 2022, representing a cumulative decrease of 17.38% and an average annual decrease of 4.35%. Other relevant data to consider is the fluctuation in the number of exporting regions. The year with the biggest number of regions was 2021 (11 regions), and the lowest point was 2019 (9 regions). There was cumulative growth of 2.02% and average growth of 0.51% between 2018 and 2022.
In export volumes, there was inter-annual variation, with a peak of 136.0 million kg in 2021 and a minimum of 128.0 million kg in 2020. The fluctuation in export quantity led to a cumulative growth rate of -2.03% and an average growth rate of -0.51%. The average export volume during the five-year period was 132.12 thousand tons. Regarding FOB value, variation over time was detected, with a peak of 400.2 million USD in 2019 and a minimum of 370.2 million USD in 2022. The average FOB value during the five-year period was 388.1 million USD, fluctuating with a cumulative growth rate of -1.28% and an average growth rate of -0.32%.
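The cumulative and average growth rates quoted throughout the Results follow directly from the yearly series. A minimal sketch, assuming "average growth" means the arithmetic mean of year-over-year percentage changes and "cumulative growth" the total change from the first to the last year; the 2018 and 2020 FOB values below are hypothetical placeholders, since only the 2019, 2021 and 2022 totals appear in the text.

```python
def annual_growth(series):
    """Year-over-year percentage changes between consecutive values."""
    return [100 * (b - a) / a for a, b in zip(series, series[1:])]

def average_growth(series):
    """Average annual growth: the mean of the year-over-year rates."""
    rates = annual_growth(series)
    return sum(rates) / len(rates)

def cumulative_growth(series):
    """Total percentage change from the first to the last year."""
    return 100 * (series[-1] - series[0]) / series[0]

# FOB value in million USD, 2018-2022; the 2018 and 2020 entries
# are hypothetical, the others are taken from the Results section.
fob = [385.0, 400.2, 390.0, 399.4, 370.2]

print([round(r, 1) for r in annual_growth(fob)])
print(round(average_growth(fob), 2), round(cumulative_growth(fob), 2))
```

Note that the average of annual rates and the cumulative rate generally differ, which is why the Results report both figures for each indicator.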
Discussion
The decrease in Peru's asparagus exports in 2022 compared to the previous year reflects common fluctuations in international agricultural markets, where factors such as weather, pests, and global economic conditions can affect production and demand (Karahan, 2017). However, the downward trend observed in several key markets, especially the United States, the Netherlands, and the United Kingdom, may also suggest specific challenges in those destinations.
Regarding the high dependence on the US market, it is relevant to note that several studies (Fassio, 2018; Jaffee, 2023) have shown that market diversification can help mitigate the risks associated with over-dependence on a single market.
The increasing trend of market concentration, indicated by the rise in the HHI, may also imply risks. Maertens & Swinnen (2009) suggest that greater market concentration can lead to higher vulnerability to demand fluctuations in those markets and to changes in import policies.
At the business level, the increase in company diversification, suggested by the decrease in HHI, highlights the importance of competitive strategies. According to Elumalai & Kumar (2023), companies can gain sustainable competitive advantages through innovation and improving efficiency in their operations.
In addition, the high degree of dependence on the American market for Peruvian asparagus resembles the case of the Mexican mango, where the need for market diversification and added value to the products was highlighted (Maya et al., 2011). Just as that background study proposes Japan as an alternative destination for Mexico, in Peru there are also promising countries to consider as importers, such as Belgium and Canada, which have shown considerable growth rates in their imports.
In Spain, a direct relationship was determined between export diversification and export performance (Rosal, 2019). Hence, it can be said that export concentration leads to a decrease in export performance, which is reflected in Peruvian asparagus exports: the HHI of the markets increased between 2018 and 2022 while, at the same time, exports decreased, so this direct relationship can be confirmed.
Another case study was carried out in Ecuador with mangoes, where, as in Peru, there is a big concentration in the American market (Quiñonez et al., 2021). In the Ecuadorian case, this high level of market concentration is due to public sector limitations; in the case of Peru, further research is needed into the possible drivers that lead to the concentration of asparagus exports in the United States as the major destination.
It was also demonstrated that developing countries tend more to diversify their products for export (Markakkaran & Sridharan, 2022). Taking this study into account, product diversification could be a favorable starting point for Peru in asparagus, adding value to this good rather than limiting itself to the raw, primary produce. This study agrees with the Mexican one, which suggests product transformation in order to be more competitive in destinations such as Japan (Maya et al., 2011).
It is necessary to remember that the market diversification theory states that the more clients a business relies on, the less exposed the country is to the risks of having few markets (Fassio, 2018; Maertens & Swinnen, 2009; Quiñonez et al., 2021). Although the number of importing countries grew between 2018 and 2022, the concentration of export volumes also grew, so there is no harmony between the two aspects. It can also be underscored that market diversification reduces the risks of depending on only one destination country. In this case, Peru has a very high degree of dependence on the American market, and its risk is high in the event of market crises associated with the United States, given how heavily it relies on this destination. It is also said that R&D leads to healthier market diversification (Gnangnon, 2022; Yllescas-Rodríguez et al., 2021), which is something Peruvian companies could pursue and the Peruvian government could promote.
Regarding the export entity diversification theory, it is known that the more companies a country is endowed with, the better it can face the risks of depending on only one enterprise, as well as market risks such as price fluctuations (Barney, 1991; Karahan, 2017; Li et al., 2022; Porter, 2008). Peru has a low concentration of enterprises in asparagus exports, and this concentration has even decreased, which is a very convenient situation. What is not consistent with this is the fact that the number of exporting companies decreased from 92 in 2018 to 77 in 2022; however, this does not mean that export shares became more concentrated among the remaining enterprises in recent years. It is also important to remark that the diversification of exporting companies involves a series of conditions: export business creation policies, technology, efficiency in production, and infrastructure (Li et al., 2022). The decrease in the HHI suggests that these conditions were complied with in the period of analysis, a factor that benefits exports and reduces the risks for Peru of relying on just one exporting company, as well as reducing Peru's vulnerability to external threats in the international market.
The decline in the number of exporting companies may reflect barriers to entry in the international market. According to Rugman & Verbeke (2008), companies may face various challenges in internationalizing: lack of knowledge about foreign markets, tariff and non-tariff barriers, and difficulty in adapting to cultural and regulatory differences.
Finally, the topic of the exporting regions has to be addressed. The number of regions fluctuated over the years of analysis, but the slight trend towards diversification of these regions also has merit, based on the HHI. This does not mean that there is no regional concentration at all; in fact, La Libertad and Ica are the zones where asparagus exports are heavily focused, holding an average of 78.2% of the export share between 2018 and 2022. It must be remembered that the lower the dependence on a few exporting or importing markets, the fewer the risks a company is exposed to (Gnangnon, 2022). This situation also reflects the need to endow the country with efficient businesses in other regions to promote this kind of diversification (Li et al., 2022).
Conclusions
The dynamics of Peru's asparagus export market show a clear dominance of the United States as the main destination for Peruvian exports during the 2018-2022 period.Despite a decrease in exports in 2022, the United States continues to represent the largest proportion of total asparagus exports.However, the decline in exports to this country and the negative average annual growth during this period suggest the need for further diversification of export markets to minimize reliance on a single market.
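The "negative average annual growth" referred to above is typically computed as a compound annual growth rate (CAGR). A minimal sketch with hypothetical export values (not the paper's data):

```python
def cagr(initial, final, years):
    """Compound annual growth rate between an initial and a final value."""
    return (final / initial) ** (1 / years) - 1

# Hypothetical: exports falling from 120 to 100 (million USD) over 4 years
rate = cagr(120, 100, 4)
print(f"{rate:.2%}")  # negative, i.e. an average annual decline
```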
The increase in the Herfindahl-Hirschman Index (HHI) from 3929 in 2018 to 4584 in 2022 indicates a growing concentration in the destination market for asparagus exports.This increase in market concentration may be a sign of decreased competition in the target markets, which can have implications for the stability and sustainability of Peru's asparagus exports.
At the company level, Danper Trujillo and Complejo Agroindustrial Beta lead Peruvian asparagus exports, but notable growth has also been observed in companies like Agrícola Cerro Prieto and Agrovision Perú.However, the overall downward trend of HHI from 503 in 2018 to 410 in 2022 suggests increased competition among asparagus exporting companies.This, combined with the decrease in the number of exporting companies from 92 in 2018 to 77 in 2022, indicates a constantly evolving market and the need for adaptable and resilient business strategies.
The topic of the regions is also relevant in this study. La Libertad and Ica concentrated Peru's asparagus exports between 2018 and 2022, although this concentration lessened over time, with the HHI falling from 3503 points in the first year of analysis to 3032 points in the last.
Finally, a fundamental limitation lies in the reliance on data from customs declarations and public records, which, although providing an extensive and up-to-date database, may be subject to inherent biases or reporting errors.Furthermore, the use of the HHI index to assess market concentration and diversification, while widely accepted and used in economic and international trade analyses, is based on assumptions that may not capture the full complexity of the global asparagus market, especially regarding competition dynamics, the entry of new competitors, and the fluctuation of market shares.This methodological approach may not fully reflect market and company diversification strategies nor the effects of emerging trade and economic policies.These limitations underscore the need to interpret results with caution and consider the incorporation of additional qualitative methodologies in future research to gain a more holistic and nuanced understanding of diversification in Peru's asparagus exports.
Recommendations
Given the dependence on the US market, it is recommended to diversify export destinations.In this regard, exploring emerging markets or markets with growing demand for asparagus, such as Canada, Belgium and countries in East Asia or the Middle East, could be beneficial.Additionally, marketing strategies in existing markets could be redesigned to increase market share in countries like the United Kingdom or the Netherlands, which are already significant destinations but experiencing a decrease in demand.Due to the increasing market concentration, as indicated by the rise in HHI, it could be advantageous to develop strategies for product diversification.This could involve expanding into related products, such as processed or value-added foods that utilize asparagus, allowing for a broader reach in the market.
At the company level, the increase in competition highlights the importance of innovation and efficiency in operations.Companies could benefit from implementing advanced production technologies, improving logistics and supply chain processes, and investing in employee training, development, and infrastructure.Finally, given the decrease in the number of exporting companies, it would be beneficial for Peruvian authorities to provide support and resources to emerging enterprises.This could involve initiatives for financing, export counseling, and training in international business.Additionally, promoting collaborations and partnerships among companies could lead to greater resilience and adaptability in Peru's asparagus export sector.
Regarding the regional factor, it is recommended to develop businesses in regions other than La Libertad and Ica, so as not to depend on those two regions and to reduce exposure to market risks. Lambayeque, Lima, and Ancash were shown to be regions with growth potential.
Table 1. World demand of asparagus (imports, in thousands of tons). Includes reporting and non-reporting countries, in addition to estimations obtained by the International Trade Center and the United Nations Statistics Division. Data extracted from Trademap.
Table 2. Asparagus exports from Peru by destination market in FOB (million USD).
Table 3. HHI of destination countries.
Table 4. Peru's asparagus exports by companies in FOB (million USD).
Table 6. Exports by department of Peru in FOB (million USD).
Why Accountability Sharing in Health Care Organizational Cultures Means Patients Are Probably Safer.
Because human errors should be regarded as expected events, health care organizations should routinize processes aimed at human error prevention, limit negative consequences when human errors do occur, and support and educate those who have erred. A just culture perspective suggests that responding punitively to those who err should be reserved for those who have willfully and irremediably caused harm, because punishment creates blame-based workplace cultures that deter error reporting, which makes patients less safe.
A Case of One Kind of Medication Error

Despite their best conscientious efforts, physicians and other health care clinicians will inevitably make mistakes by omission, commission, or simply as a result of human nature and imperfections of work environments. A recent case from Tennessee highlights an example of medication error and can serve as the basis of an analysis of accountability in health care. The facts of the case are as follows: due to claustrophobia, an elderly patient who was anxious about a scheduled positron emission tomography (PET) scan was prescribed midazolam hydrochloride to help her feel more at ease. 1 This patient's nurse proceeded to retrieve the drug from an automatic dispensing cabinet. The dispenser's override feature enabled the nurse to select the first drug result displayed, 1 dismiss a series of 5 pop-up warnings, and withdraw the selected (wrong) drug, a paralyzing agent, from the cabinet. 2 The nurse removed a vial labeled with a paralysis warning from the cabinet dispenser, delivered it to the radiology department where the patient's PET scan was about to occur, and administered the drug to the patient via injection as directed. Thirty minutes later, the patient was found in cardiac arrest. Although the patient was resuscitated and transferred to an intensive care unit, clinicians deemed the patient unlikely to recover, and the patient's family agreed another resuscitation attempt would not be appropriate. The patient was extubated and died shortly thereafter. 1

Codes and Cultures

When analyzing this case of medication error, two organizations' codes of ethics can be drawn on to illuminate key features of organizational cultures in health care that inform what might be an appropriate response. For example, The Code of Ethics for Nurses states: "[W]hile ensuring that nurses are held accountable for individual practice, errors should be corrected or remediated, and disciplinary action taken only if warranted."
3 Responding punitively to nurses who err, such as terminating their employment or charging them criminally, might not be warranted because the American Nurses Association believes that "[C]riminalization of medical errors could have a chilling effect on reporting and process improvement." 4 The American Medical Association's Code of Medical Ethics Opinion 8.6, "Promoting Patient Safety," emphasizes both individual and collective accountability for errors. Physicians, who are "uniquely positioned to have a comprehensive view of the care patients receive," should "strive to ensure patient safety" and additionally "play a central role in identifying, reducing, and preventing medical errors." 5 Opinion 8.6 further states: "Both as individuals and collectively as a profession, physicians should support a positive culture of patient safety, including compassion for peers who have been involved in a medical error." 5 Each of these organizations' code statements underscores the importance of viewing any clinician action, including an error, in light of the social and cultural context in which that action was carried out.
Just Culture
Just culture offers a model for creating positive workplaces in health care settings 6,7 by balancing "the need for an open and honest reporting environment with the end of a quality learning environment and culture." 7 Its premises echo conclusions from the Institute of Medicine's 1999 report, To Err is Human: Building a Safer Health System, 8 which found that most medical errors arise from "faulty systems, processes, and conditions that lead people to make mistakes or fail to prevent them" rather than from reckless actions by individuals working within those systems. 9 As a result, the just culture model serves as a guide for health care systems and institutions by incorporating elements such as human factor design, error prevention, and steps to contain errors' consequences before they become critical. Its goals are to create a fair and open environment to promote learning, support the design and implementation of safety systems, and guide behavioral choices.
Although a just culture framework views adverse outcome events as opportunities to understand any contributing risks and how to mitigate them, it is not blame free. A just culture framework endeavors to balance 3 basic duties (to avoid causing unjustified risk or harm, to produce desired outcomes, and to follow procedural rules) against shared organizational and individual values of dignity, safety, equity, cost, and effectiveness. 6,7 Under the just culture framework, medical mistakes, such as medication errors, can be classified as simple human error (eg, unintentional errors or lapses), as risky behaviors (ie, "a conscious drift" toward actions in which the risks taken are unforeseen or mistakenly believed to be justified), or as recklessness, defined as willful disregard of unjustified risks. 7 Recommended remedies for these mistakes are, respectively, consolation, coaching to understand risks, and punishment, where corrective responses are based upon clinician behaviors rather than patient outcomes. 7

Cultures Compared

Just culture and law enforcement both aim to prevent harm to persons or patients, property, and public interests. Just culture emphasizes the quality or desirability of an individual's choices and behaviors and apportions corrective actions or discipline on that basis more so than on the severity of the consequences. Criminal law, on the other hand, often focuses on outcomes, and while the law "generally disallow[s] criminal punishment for careless conduct, absent proof of gross negligence" (ie, a heightened level of negligence that may include recklessness), some "legislatures occasionally permit punishment based on ordinary negligence, primarily when the conduct is extremely dangerous and may cause harm to a significant number of people." 10 Just culture also attempts to differentiate degrees of intent or blame more finely than the law does.
These gradations range from ordinary human error at the low end of culpability, to risky behaviors, recklessness, and, finally, purposeful action to inflict harm. 7,11 Criminal law often creates a "twilight zone" in its vague interpretation of the various degrees of negligence, ie, "willful," "wanton," "reckless," and "gross" negligence, which may encompass "recklessness." 12 In a just culture model, negligence encompasses both unintentional errors (accidents) and risky behavior (decisions) but not recklessness. 11 Instead of imposing punishments for all categories of failures of duty, just culture advocates acceptance and support for errors, coaching to change risky behaviors, and discipline or punishment for those whose actions are reckless because they were committed with knowledge of harm or with purposeful intent to harm. 7 Returning to the case example of medication error, those espousing a just culture perspective might observe that the nurse chose to override orders and warnings from the drug cabinet and that she neglected to confirm the drug, record the injection, and monitor the patient. However, the patient's death, though tragic, was unintended. Although the nurse's mistakes may have been numerous, they began with a human error of selecting the wrong medication. As a result, the nurse's culpability could be construed as being low (simple error or risky behavior), and the corresponding remedies would be support and education rather than criminal prosecution. In this vein, some might argue that her choices and her awareness of risk, not the outcome, should be the crucial determinants of the correct response. She would not be considered reckless if she was not cognizant of risks. Her attention might have been drawn elsewhere-to her trainee, for example.
Or, she might have been enculturated into daily workplace practices of using the override functions without fully appreciating the potential hazards, reflecting the human tendency to drift away from stringent adherence to standards. Just culture would consider this behavior risky but natural. 11 David Marx describes this "propensity to drift into at-risk behaviors" using an automotive example in which one driver is driving 9 miles per hour over the speed limit, while another driver may be driving 50 miles per hour over the speed limit and swerving wildly. The first driver is "drifting," not consciously aware of the risk, whereas the second driver is clearly driving with conscious knowledge of his or her recklessness. 11 Because the just culture model views "the propensity to drift" as "part of our human nature," mitigating at-risk behavior caused by "drifting" should be the focus in designing hospital patient safety programs. 11 Under a just culture model, punishment of the nurse in this case would erode confidence and trust among coworkers and institutions and deter open disclosure and discussion of mistakes made.
By contrast, those adopting a "finger pointing" stance (eg, one that might arise under criminal law) might argue that the nurse's actions were indeed criminally reckless rather than merely erroneous. Her actions could be akin to those of a driver who is texting or speeding and strikes a passerby, killing him or her; both the driver's and the nurse's actions were choices rather than mere errors, and the consequences were foreseeable and preventable.
Conclusion
The goal of minimizing mistakes, including human errors, is aided by cultures and organizations that foster communication and education and punish only when warranted. A just culture model proposes that individuals working within a system should not be held responsible for mistakes or choices they make if that system fails to prevent foreseeable errors; rather, health systems and institutions should positively guide anticipated interactions and actively participate in monitoring, reporting, and fixing shortcomings to improve patient safety.
Effects of Extraction vs. Non-Extraction Treatment on Soft Tissue Changes in Class I Borderline Malocclusion Cases
Aim and Objectives: To determine the soft tissue changes between two treatment groups, extraction and non-extraction, equally susceptible to both treatment options, and to compare the changes taking place in the soft tissue variables from one group to the other using cephalometric analysis.
Introduction
One of the major reasons patients seek orthodontic treatment is to improve their facial appearance. 1 In today's world, people are more concerned about their facial appearance. To improve the facial profile, one of two important treatment approaches is employed: the extraction or the non-extraction treatment protocol. Evaluating facial profiles and facial balance is a continuous learning process for orthodontists. The debate concerning the extraction of teeth and its effect on the facial profile began more than 100 years ago. 2 Orthodontists have long recognized that the extraction of premolars often is accompanied by changes in the soft tissue profile. At times, these changes result in substantial improvements in the profile and frequently justify the extraction of teeth in patients without other indications. 3 The objectives of orthodontic treatment are to attain optimal functional occlusion and harmonious facial esthetics and to maintain those results. [4,5] Orthodontic treatment with fixed appliances includes two mutually exclusive treatment modalities: extraction and non-extraction. Extraction treatment is mostly used to relieve moderate to severe crowding and sometimes also to correct dental or dentoalveolar protrusion. On the other hand, non-extraction treatment is selected or preferred for cases with minor skeletal and moderate dental discrepancies.
The choice between extraction and non-extraction treatment is usually based on orthodontic training, treatment philosophy, or temporal trends. 6,7 In the orthodontic literature, the perception of ideal facial esthetics, mainly identified with the patient's profile, and the employment of either of the two main treatment approaches (extraction or non-extraction) have been highly controversial issues. The controversy becomes even greater when dealing with borderline cases. 1 In extraction therapy, orthodontists have long recognized that the extraction of premolars often is accompanied by changes in the soft tissue profile. At times, these changes result in substantial improvement in the profile and frequently justify the extraction of teeth in patients without other indications. At other times, however, premolar extraction can lead to a flatter profile. For this reason, a carefully studied extraction policy, accounting for all possible changes, would be very valuable. [8,9] The studies of Angelle and Hersey showed that changes in tooth position are not systematically followed by proportional soft tissue profile changes. Variables such as lip morphology, type of treatment, extraction vs non-extraction therapy, choice of extraction, and patient gender and age have been held responsible for individual differences in soft tissue response. 10,11 Therefore, the purpose of this study was to determine the soft tissue changes between two treatment groups, extraction and non-extraction, equally susceptible to both treatment options, and to compare the changes taking place in the soft tissue variables from one group to the other using cephalometric analysis.
Materials and Methods
The present study was conducted on 50 orthodontically treated patients, who were divided into two groups: an extraction group (25 patients) and a non-extraction group (25 patients). The pre-treatment and post-treatment lateral cephalograms were obtained from the Department of Orthodontics, Pandit Deendayal Upadhyay Dental College, Solapur.
Criteria for Patient Selection
All patients had a full complement of teeth.
Exclusion criteria: 1) congenitally missing teeth; 2) congenital anomalies; 3) facial asymmetries.
Inclusion criteria: patients with Class I dental malocclusion, treated with or without extraction of premolars.
Method: Tracing of the pre- and post-treatment lateral cephalograms was done on acetate sheets 0.5 microns in thickness using a sharp pencil of 0.3 mm diameter. To assess the soft tissue changes, the measurements shown in Table 1 and Figures 1, 2, and 3 were used. Comparison between groups was done by applying Student's unpaired 't' test at the 5% (p < 0.05) and 1% (p < 0.01) levels of significance. Also, one-way ANOVA (Tukey-Kramer multiple comparison test) at the 5% (p < 0.05) and 1% (p < 0.01) levels of significance was used to test the difference between mean values of all parameters from pre- to post-treatment together in the extraction and non-extraction groups.
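The between-group comparison described above (Student's unpaired 't' test on mean changes) can be sketched in a few lines. This is a generic pooled-variance implementation with hypothetical sample values, not the authors' code or data:

```python
import math
from statistics import mean, variance

def unpaired_t(a, b):
    """Pooled-variance (Student's) two-sample t statistic and degrees of freedom."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical per-patient change values (mm) for two small groups
extraction = [5.1, 5.9, 5.6, 5.8]
non_extraction = [2.6, 3.0, 2.8, 2.9]
t, df = unpaired_t(extraction, non_extraction)
print(round(t, 2), df)
```

The resulting t statistic would then be compared against the Student's t critical value for the given degrees of freedom at the 5% or 1% significance level.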
Intergroup Post-treatment Differences
The mean change values for the upper lip relative to the E-plane were 5.60 mm for the extraction group and -2.80 mm for the non-extraction group. The lower lip was retracted -4.00 mm relative to the E-plane in the extraction group and brought forward -0.40 mm in the non-extraction group. In relation to Burstone's Sn-Pg' line, the upper lip was retracted -4.00 mm in the extraction group and -0.70 mm in the non-extraction group, whereas the lower lip was retracted -3.50 mm and brought forward 2.80 mm, respectively. From the measurements estimating lip thickness and sulcus depth, only the mean value change for upper lip thickness proved to be statistically significant (P < 0.05), exhibiting an increase of 3.00 mm in the extraction group vs 0.80 mm in the non-extraction group. The nasolabial angle had a statistically significant (P < 0.01) increase of 13.9 degrees within the extraction group and a decrease of -0.70 degrees within the non-extraction group. Table No. 2 shows the results of the two-sample t-tests that were run to evaluate differences in the mean value changes between the two treatment groups.
Discussion
The success of orthodontic treatment is always influenced by the ability of the clinician to develop an optimal treatment plan; the morphologic relationships and proportions of the nose, lips, and chin determine facial harmony in orthodontics. 8 The main purpose of the present study was to compare the effects on the facial profile of first premolar extraction between a sample of patients in whom premolar extractions were considered necessary and a similar sample in whom a conservative treatment was applied.
Lip structure seems to have an influence on lip response to incisor retraction. [13,14] In the extraction group, the upper and lower lips moved back relative to the E-line and the Sn-Pg' line. In the non-extraction group, the upper lip was slightly retracted and the lower lip was slightly protracted.
Upper Lip
Considering Ricketts' E-plane, the upper lip exhibited a 5.60 mm retraction in the extraction group vs a -2.80 mm retraction in the non-extraction group. Relative to the Sn-Pg' line, the difference between the two groups was significant. Since this plane is considered a plane of minimal variation, all relative measurements are less influenced by any potential growth remainders. 15 The change value for the extraction group (-4.0 mm) is slightly smaller than that reported by Drobocky and Smith 3 and Bravo 13 (-2.12 mm and -2.4 mm, respectively). The amount of upper lip retraction is smaller than that assessed relative to the E-plane. A possible explanation is that slight growth of the nose might have contributed to the overall retropositioning of the lip. The non-extraction patients exhibited a nonsignificant change of -0.70 mm. With regard to upper lip thickness, the difference in increase between the two groups was also significant: 3.00 mm for the extraction group and 0.80 mm for the non-extraction group.
Lower Lip
The mean value changes for the lower lip differed significantly between the two groups and were greater than those of the upper lip. In relation to the Ricketts E-plane, the -4 mm of retraction that the extraction patients exhibited is close to the -3.8 mm and -3.22 mm that Bravo 16 and Drobocky and Smith 3 reported. The measurements relative to Burstone's Sn-Pg' line confirmed this. For the patients treated without extractions, the findings indicate that the lower lip was protracted 2.80 mm.
Nasolabial Angle
The nasolabial angle became 13.9 degrees more obtuse in the extraction group. The mean change value for the non-extraction group was -0.70 degrees. These findings agree with the results of Finnoy et al, who found that their extraction group had a significantly greater increase of the nasolabial angle than the non-extraction group. 17 The findings of the present study indicate that, when a decrease of lip procumbency is desirable, extracting premolars and retracting incisors is a viable option to achieve this objective. However, individual variation in response is large.
Incisor retraction in one patient might lead to a large amount of lip retraction, whereas, in another patient, a similar amount of retraction might lead to only minimal improvement in lip procumbency. 2 It is always better to inform the patient about the expected average change, but also that it could be different in a particular instance. With sound diagnoses and good treatment, major differences in the soft tissue profile should not necessarily be produced, irrespective of treatment with or without extraction of premolars. Therefore, the avoidance of extracting premolars for fear of significant detrimental effects on the face might not always be justified. 4 To treat patients non-extraction for the sake of not removing teeth, ease of treatment, or the dictates of an appliance is not sound reasoning and makes as much diagnostic sense as treating all patients with the extraction of all four first premolars. In other words, it is just as diagnostically wrong to treat an extraction patient non-extraction as it is to treat a non-extraction patient with extractions. 19 The truth lies somewhere in between and is based on a sound quantified measurement analysis, differential evaluation of the problem, and clinical assessment.
Table No. 2: Extraction vs non-extraction: descriptive and inferential statistics of mean value differences.
Table No. 3: Comparison of mean and SD values of all parameters from pre- to post-treatment in the extraction group (n = 25).
Graph 2: Comparison of all parameters from pre- and post-treatment in the extraction group.
By applying Student's paired 't' test, a significant pre-treatment to post-treatment difference was found between mean values of the parameters Ls-E plane (mm), Li-E plane (mm), Ls-Sn-Pg' (mm), Li-Sn-Pg' (mm), Is-Ls (mm), Sn-Ls (mm), and nasolabial angle (degrees), while no significant difference was found for the parameters G-Sn-Pg' (degrees), Ls-St (mm), Ii-Li (mm), and Li-Pg' (mm) in the extraction group.
Table No. 4: ANOVA test for pre- to post-treatment values of all parameters compared together, extraction group.
Table No. 6: ANOVA test for pre- to post-treatment values of all parameters compared together, non-extraction group.
Graph 4: Comparison of all parameters from pre- and post-treatment in the non-extraction group.
Table No. 5: Comparison of mean values of all parameters from pre- to post-treatment in the non-extraction group.
Orthod 1982;81:481-8.
16. Bravo LA. Soft tissue profile changes after orthodontic treatment with four premolars extracted. Angle Orthod. 1993;64:31-42.
17. Finnoy JP, Wisth PJ, Boe OE. Changes in soft tissue profile during and after orthodontic treatment.
Experimental analysis of small scale water cooler equipped with TTHC heat exchanger
In the present work, experiments were conducted to recover waste heat from a water cooler condenser. A tube-in-tube helical coil (TTHC) heat exchanger, made from copper pipe with an inner-tube outer diameter of 4.47 mm and an outer-tube inner diameter of 8.30 mm, replaced the water cooler condenser to recover the waste heat. A new empirical correlation was suggested to predict the annulus Nusselt number for laminar flow of water with Reynolds numbers ranging from 1500 to 3900 and a Prandtl number of approximately 1.6. The results reveal that the annulus Nusselt number varies significantly with the volume flow rate through the annulus of the helical tube in the laminar flow region. The small scale water cooler equipped with the TTHC heat exchanger recovered a maximum of 371.5 W of heat by cooling water.
Introduction
A helically coiled tube heat exchanger is one of the passive methods of enhancing heat transfer, which do not require external power. A passive method commonly uses geometrical modification of the flow channel for enhancement and compactness [1-4]. Helically coiled heat exchangers are used to transfer heat between two or more fluids in numerous heating and cooling applications in industry and engineering, such as cooling in the electronics industry, refrigeration and air-conditioning units, thermal power plants, heat recovery systems, domestic water heaters, space vehicles, and automobiles. The current work mainly focuses on heat recovery by cooling water from a small scale water cooler equipped with a TTHC heat exchanger.
Many researchers, such as Prabhanjan et al. [5], Mao et al. [6], and Dravid et al. [7], have reported that secondary flow occurs in helical tubes due to centrifugal force, which enhances heat transfer.
Elsayed et al. [8] studied a small scale cooling system with a helical coil condenser and evaporator. A mathematical model was developed for this system, and the results showed that the COP increases as the coil diameter decreases. Seban and McLaughlin [9] experimentally studied heat transfer in helically coiled tubes with laminar flow of a medium-heavy Freezene oil and turbulent flow of water, and developed a correlation for two coil-diameter-to-tube-diameter ratios, 17 and 104. Similar work was done by Rogers and Mayhew (1964) for three different coil-diameter-to-tube-diameter ratios, namely 10.8, 13.3, and 20.12, and compared with the results of Kalb and Seader [10]. Xin and Ebadian [11] discussed the effect of Prandtl number on convective heat transfer coefficients in helical tubes for three different fluids (air, water, and ethylene glycol) and developed an empirical expression from their experimental data and some data from previous work. The above literature shows that numerous studies have been carried out on helically coiled tubes as well as tube-in-tube helical coil heat exchangers. Although a number of correlations have been developed for the inner tube side of helical coils, very few correlations are available for the annulus region of tube-in-tube helical coil heat exchangers. In the present work, the Nusselt number in the annulus of a tube-in-tube helical coil (TTHC) heat exchanger was analyzed using experimental data and the suggested correlation.
Material and methods
The test section used in the present work consists of soft copper tubing, as shown in Figure 1. Initially, equal lengths (L = 9 m) of both the inner and outer straight tubes were taken. The geometrical parameters of the TTHC heat exchanger are given in Table 1.
Figure 1.TTHC heat exchanger
The inner tube (outer diameter 4.47 mm) was inserted into the outer tube (inner diameter 8.3 mm), and fine sand particles were filled into the inner tube as well as the annulus space in order to preserve the smoothness of the inner surface during bending. The TTHC heat exchanger, with a coil diameter of 282 mm, was formed with the help of a wooden pattern, and the sand was flushed out after coiling. The flow instrumentation has an accuracy within 1.5% of full scale. The secondary closed loop consists of a complete water cooler, of which the present work considers only the condenser part, i.e. the TTHC heat exchanger. The test section is a TTHC heat exchanger consisting of an inner helical tube, in which refrigerant flows at a constant rate to maintain its outer surface at 42 ± 0.2 °C, and an outer helical tube (annulus passage) through which cooling water, the working fluid, flows at different rates. The outer surface of the TTHC heat exchanger is insulated to minimize heat loss to the surroundings. Five K-type thermocouples were used in this experiment: one fixed in the cooling-water tank and the remaining four inserted into drilled holes at the inlet and outlet of the inner and outer helical test sections. Epoxy was applied at the drilled holes to prevent fluid leakage. Two four-channel temperature indicators were used to measure the temperature at each point.
Analysis of experiment
In the present work, the thermo-physical properties of water are obtained as a function of the mean film temperature following Kays et al. [12]. The geometrical quantities used in the analysis are the inner diameter of the inner tube, the outer diameter of the inner tube, the inner diameter of the outer tube, and the outer diameter of the outer tube of the TTHC heat exchanger, respectively.
The Dean number of the water flow in the annulus of the helical tube, based on the equivalent diameter, is defined by Eq. (5). Figure 4 shows the variation of the annulus Nusselt number with the volume flow rate of cooling water: the annulus Nusselt number increases with the volume flow rate. The present results were compared with previous works and found to be in good agreement.
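Since Eq. (5) itself is not reproduced here, the sketch below illustrates the usual definitions with stated assumptions: the equivalent diameter is taken as the hydraulic diameter of the annulus, D_eq = D_i − d_o, and nominal water properties (ρ ≈ 995 kg/m³, μ ≈ 7.7·10⁻⁴ Pa·s) are assumed. The dimensions are those of the test section above.

```python
import math

# Geometry of the test section (Table 1), in metres.
D_COIL = 0.282       # coil diameter
D_O_ID = 8.3e-3      # inner diameter of the outer tube
D_I_OD = 4.47e-3     # outer diameter of the inner tube

def equivalent_diameter(D_i: float, d_o: float) -> float:
    """Hydraulic diameter of an annulus: 4*A/P = D_i - d_o (assumed form)."""
    return D_i - d_o

def reynolds_annulus(vol_flow_m3s: float, rho: float = 995.0, mu: float = 7.7e-4) -> float:
    """Reynolds number of water in the annulus based on the equivalent diameter."""
    area = math.pi / 4.0 * (D_O_ID**2 - D_I_OD**2)  # annulus cross-section [m^2]
    velocity = vol_flow_m3s / area
    return rho * velocity * equivalent_diameter(D_O_ID, D_I_OD) / mu

def dean_number(re: float) -> float:
    """Dean number: De = Re * sqrt(d_eq / D_coil)."""
    return re * math.sqrt(equivalent_diameter(D_O_ID, D_I_OD) / D_COIL)

re = reynolds_annulus(40.0 / 1000.0 / 3600.0)  # 40 l/h of cooling water
de = dean_number(re)
```

At 40 l/h this gives a Reynolds number on the order of 10³, consistent with the laminar range studied here, with De ≈ 0.12·Re for this coil geometry.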
Results and discussion
A new correlation, Eq. (7), is proposed for 1500 < Re < 3900 and Pr = 1.6. The experiments were carried out to obtain the annulus Nusselt number from Eq. (7) using the experimental data, and the annulus Nusselt number of the TTHC heat exchanger was compared with previously developed correlations such as that of Seban and McLaughlin [9].

Figure 5. Variation of heat recovery by cooling water from R-134a versus volume flow rate of cooling water

Figure 5 shows the effect of the volume flow rate in the annulus of the helical tube on the heat recovery from R-134a. The heat recovery from the refrigerant increases as the water flow through the annulus of the helical tube increases.
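The heat recovered by the cooling water follows from a simple energy balance, Q = ṁ·c_p·ΔT. The sketch below assumes nominal water properties and a hypothetical temperature rise of 8 K (not a measured value); the reported maximum of 371.5 W at 40 l/h corresponds to a rise of roughly this size.

```python
def heat_recovery_w(vol_flow_l_per_h: float, dt_kelvin: float,
                    rho: float = 995.0, cp: float = 4180.0) -> float:
    """Energy balance Q = m_dot * cp * dT for the cooling water, in watts."""
    m_dot = rho * vol_flow_l_per_h / 1000.0 / 3600.0  # mass flow rate [kg/s]
    return m_dot * cp * dt_kelvin

q = heat_recovery_w(40.0, 8.0)  # ~370 W for an assumed 8 K temperature rise
```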
Conclusion
In this study, experiments were conducted on a TTHC heat exchanger. A maximum of 371.5 W of heat was recovered by cooling water at 40 l/h from a small-scale water cooler equipped with the TTHC heat exchanger. Based on the equivalent diameter, a new correlation was suggested for calculating the annulus Nusselt number for laminar flow of the cooling water. The results of the suggested correlation were found to be in good agreement with previously suggested correlations within 1500 < Re < 3900, Pr = 1.6 and 77.25.
"year": 2021,
"sha1": "4ac982edf709f65cf2a54556e7701ff070f4a93a",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1080/1/012035",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "0e6300c39ec8bf8db76ce91a8a50ec41e3f86483",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Environmental Science"
]
} |
On computing distributions of products of non-negative independent random variables
We introduce a new functional representation of probability density functions (PDFs) of non-negative random variables via a product of a monomial factor and linear combinations of decaying exponentials with complex exponents. This approximate representation of PDFs is obtained for any finite, user-selected accuracy. Using a fast algorithm involving Hankel matrices, we develop a general numerical method for computing the PDF of the sums, products, or quotients of any number of non-negative random variables yielding the result in the same type of functional representation. We present several examples to demonstrate the accuracy of the approach.
Introduction
Consider two non-negative independent random variables X and Y with probability density functions (PDFs) f and g. It is well known that the PDF s of their sum, X + Y, is given by the convolution

(1.1) s(t) = ∫_0^t f(t − y) g(y) dy,

the PDF p of their product, XY, is given by

(1.2) p(t) = ∫_0^∞ ∫_0^∞ f(x) g(y) δ(t − xy) dx dy,

where δ is the delta function, or alternatively, as

(1.3) p(t) = ∫_0^∞ f(t/y) g(y) dy/y,

and the PDF q of their quotient, X/Y, is given by

(1.4) q(t) = ∫_0^∞ f(ty) g(y) y dy.

In this paper we introduce a new approximate representation of PDFs of non-negative random variables via a product of a monomial factor and a linear combination of decaying exponentials with complex exponents. Importantly, representing PDFs in this form allows us to evaluate these integrals numerically so that the resulting PDFs have the same functional representation as the original PDFs and, thus, can be used in further computations. Essentially, we provide algorithms to represent the PDF of a non-negative random variable within any user-selected accuracy via an optimal linear combination of Gamma-like distributions with a common shape parameter and possibly a complex valued rate parameter (with a negative real part). By optimal we mean a linear combination with a minimal number of terms for a given accuracy. We note that while in principle it is possible to use a representation with only a linear combination of decaying and oscillatory exponentials, introducing an additional monomial factor to account for a possible rapid change of the PDF near zero makes the approximation significantly more efficient.
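As a quick numerical illustration of the convolution (1.1), using two exponential densities whose sum PDF is known in closed form (X ~ Exp(1), Y ~ Exp(2), for which s(t) = 2(e^(−t) − e^(−2t))), the integral can be evaluated with a simple trapezoidal rule:

```python
import math

def f(x: float) -> float:
    return math.exp(-x)              # PDF of Exp(1)

def g(y: float) -> float:
    return 2.0 * math.exp(-2.0 * y)  # PDF of Exp(2)

def sum_pdf(t: float, n: int = 2000) -> float:
    """s(t) = int_0^t f(t - y) g(y) dy via the trapezoidal rule."""
    h = t / n
    total = 0.5 * (f(t) * g(0.0) + f(0.0) * g(t))
    for k in range(1, n):
        y = k * h
        total += f(t - y) * g(y)
    return total * h

t = 1.3
exact = 2.0 * (math.exp(-t) - math.exp(-2.0 * t))  # closed-form sum PDF
```

At t = 1.3 the trapezoidal value agrees with the closed form to far better than single precision, reflecting the O(h²) accuracy of the rule on this smooth integrand.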
Among the operations on random variables mentioned above, computing the PDF of the product is particularly difficult. It is well known (see e.g. [27]) that the Mellin transform of p in (1.3) is equal to the product of the Mellin transforms of f and g (the function p is the so-called Mellin convolution of f and g). However, numerical implementation of the Mellin transform has not resulted in a reliable numerical method. The only universal method currently available for computing the PDF of the product of two non-negative independent random variables relies on a Monte Carlo-type approach, where one samples the individual PDFs, computes their products, and collects enough samples to achieve a certain accuracy in the computation of p in (1.3). However, due to the slow convergence of such methods (typically 1/√N, where N is the number of samples), achieving high accuracy is not feasible.
Over the years, for particular PDFs f and g, there have been a number of results showing that p may be computed using special functions or a series expansion [13,28,23,24,10,29,26]. While such results are appropriate for specific distributions, they do not provide a universal method to compute p.
Our representation of PDFs of non-negative random variables differs significantly from the one we developed for random variables on the real line in [9]. For real-valued random variables with smooth PDFs (except possibly at a finite number of points where they can have integrable singularities), we use the approximation via multiresolution Gaussian mixtures in [9]. In contrast, our new representation is tailored to non-negative random variables and accounts for the boundary point (that is, zero) near which PDFs can change rapidly.
Our approach relies on several algorithms to construct, for a given accuracy, a (near) optimal representation of functions via a linear combination of exponentials. These algorithms have their mathematical foundation in the seminal AAK theory for optimal rational approximations in the infinity norm [2,3,4]. This theory relies on properties of infinite Hankel matrices (Hankel operators) constructed from the functions to be approximated. Practical algorithms use finite Hankel matrices and their singular value decomposition as in [21,18,19,20] or (a related) con-eigenvalue decomposition as in [6,16]. These algorithms effectively use analysis-based approximations rather than a straightforward optimization and are well suited for our purposes.
We introduce our representation for PDFs and derive PDFs for sums, products and quotients of two random variables in this representation in Section 2. In Section 3 we briefly describe algorithms we use for computing a near optimal representation of functions via a linear combination of exponentials as well as a fast algorithm for computing the Singular Value Decomposition (SVD) of a low rank Hankel matrix. We illustrate our approach by numerical examples presented in Section 4 and, in Section 5, we show that expectations of functions of non-negative random variables are easily evaluated using our new representation of their PDFs. Finally, we briefly discuss further work in Section 6.
2. Representation of PDFs of sums, products and quotients of non-negative independent random variables

For a user-selected accuracy ε, we approximate the PDF f_X of a non-negative random variable X as

(2.2) f_X(x) = x^(α−1) Σ_{m=1}^{M} c_m e^(−ξ_m x), x ≥ 0,

where α > 0, Re(ξ_m) > 0 and M is as small as possible. Given two non-negative independent random variables X with PDF (2.2) and Y with PDF

(2.3) g_Y(y) = y^(β−1) Σ_{n=1}^{N} d_n e^(−η_n y), y ≥ 0,

we demonstrate that the PDFs of their sum X + Y, product XY and quotient X/Y can be represented in the same functional form, thus enabling a numerical calculus of non-negative random variables. We note that while in some cases it may be possible to avoid using the explicit factor x^(α−1) in (2.2), this factor significantly reduces the number of terms required if f has a rapid change near the origin. In this section we derive formulas (in terms of special functions) for the results of these operations on independent random variables with PDFs of the form (2.2) and (2.3). Then, in Section 3, we show how to obtain the approximation of f_X (and similarly, of g_Y) in our desired functional form (2.2) and how to convert the results of operations on them back into the same functional form.
We start by deriving the PDF s of the sum of two non-negative independent random variables X and Y.

Lemma 2.1. The PDF s in (1.1) of the sum of two non-negative independent random variables X and Y, with PDFs f and g in (2.2) and (2.3), can be written as

s(t) = (Γ(α)Γ(β)/Γ(α+β)) t^(α+β−1) Σ_{m=1}^{M} Σ_{n=1}^{N} c_m d_n e^(−ξ_m t) ₁F₁(β; α+β; (ξ_m − η_n) t),

where Γ is the gamma function and ₁F₁ is the confluent hypergeometric function.

Remark. The fact that the sum is independent of the order of X and Y, p_{X+Y} = p_{Y+X}, follows from Kummer's first transformation of ₁F₁ [5, p. 191].
Next we show how to compute the PDF of the product p of two non-negative independent random variables X and Y .
Lemma 2.2. The PDF p in (1.3) of the product of two non-negative independent random variables X and Y, with PDFs f and g in (2.2) and (2.3), can be written as

p(t) = 2 t^((α+β)/2 − 1) Σ_{m=1}^{M} Σ_{n=1}^{N} c_m d_n (η_n/ξ_m)^((α−β)/2) K_(|α−β|)(2 √(t ξ_m η_n)),

where K is the modified Bessel function of the second kind.
Assuming α ≥ β, we obtain the expression with K_(α−β); assuming β > α, the same computation yields the expression with K_(β−α); since K_(−ν) = K_ν, combining both cases gives the formula with K_(|α−β|). In order to represent the PDF p of the product in the form (2.2), we need to approximate v(t) as a linear combination of exponentials. With that goal, and following [8], we first discretize (2.11) using the trapezoidal rule. Since the approximation obtained via this discretization may have an excessive number of terms, we can then use the algorithm in [16] to minimize their number.
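As a sanity check on Lemma 2.2, the sketch below compares a standard Bessel-K closed form of the product integral (1.3), written in our own notation for a single term pair, against a direct trapezoidal evaluation of (1.3) for the Gamma example of Section 4 (X ~ Γ(2,2), Y ~ Γ(3,2), whose exact product PDF is 32 t^(3/2) K₁(4√t)); the Bessel function is evaluated from its integral representation K_ν(z) = ∫_0^∞ e^(−z·cosh u) cosh(νu) du.

```python
import math

def bessel_k(nu: float, z: float, upper: float = 20.0, n: int = 4000) -> float:
    """K_nu(z) = int_0^inf exp(-z*cosh(u)) cosh(nu*u) du, truncated trapezoid."""
    h = upper / n
    total = 0.5 * math.exp(-z)  # u = 0 endpoint; the u = upper term underflows to 0
    for k in range(1, n):
        u = k * h
        total += math.exp(-z * math.cosh(u)) * math.cosh(nu * u)
    return total * h

def f(x): return 4.0 * x * math.exp(-2.0 * x)        # Gamma(2,2) PDF
def g(x): return 4.0 * x * x * math.exp(-2.0 * x)    # Gamma(3,2) PDF

def product_pdf_direct(t: float, upper: float = 30.0, n: int = 20000) -> float:
    """p(t) = int_0^inf f(x) g(t/x) dx / x, evaluated by the trapezoidal rule."""
    h = upper / n
    total = 0.0
    for k in range(1, n):  # the integrand vanishes at both endpoints
        x = k * h
        total += f(x) * g(t / x) / x
    return total * h

def product_pdf_closed(t, alpha=2.0, beta=3.0, c=4.0, d=4.0, xi=2.0, eta=2.0):
    """One term pair of the Bessel-K closed form of the product integral."""
    return (2.0 * t ** ((alpha + beta) / 2.0 - 1.0) * c * d
            * (eta / xi) ** ((alpha - beta) / 2.0)
            * bessel_k(abs(alpha - beta), 2.0 * math.sqrt(t * xi * eta)))
```

At t = 1 both evaluations reproduce 32 K₁(4), i.e. the exact product PDF, to the accuracy of the quadratures.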
Finally, we show how to compute the PDF of the quotient q of two non-negative independent random variables X and Y .
Lemma 2.4. The PDF q in (1.4) of the quotient X/Y of two non-negative independent random variables X and Y, with PDFs f and g in (2.2) and (2.3), can be written as

(2.14) q(t) = Γ(α+β) t^(α−1) Σ_{m=1}^{M} Σ_{n=1}^{N} c_m d_n (ξ_m t + η_n)^(−(α+β)).

Proof. We rewrite (1.4) as q(t) = Σ_{m,n} c_m d_n t^(α−1) ∫_0^∞ y^(α+β−1) e^(−(ξ_m t + η_n) y) dy and, evaluating the integral, we arrive at the result.
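The quotient PDF reduces to a double sum over the exponential terms of f and g and is easy to evaluate numerically. A minimal sketch, with assumed names for the coefficient–exponent pairs, verified on X, Y ~ Exp(1), for which the quotient PDF is 1/(1+t)²:

```python
import math

def quotient_pdf(t, terms_f, terms_g, alpha, beta):
    """q(t) = Gamma(alpha+beta) t^(alpha-1) sum_{m,n} c_m d_n (xi_m t + eta_n)^-(alpha+beta).

    terms_f / terms_g are lists of (coefficient, exponent) pairs of the
    exponential representations of f and g."""
    s = 0.0
    for c, xi in terms_f:
        for d, eta in terms_g:
            s += c * d * (xi * t + eta) ** (-(alpha + beta))
    return math.gamma(alpha + beta) * t ** (alpha - 1.0) * s

# X, Y ~ Exp(1): f(x) = e^{-x}, i.e. alpha = 1 and one term (c, xi) = (1, 1).
q = quotient_pdf(2.0, [(1.0, 1.0)], [(1.0, 1.0)], 1.0, 1.0)  # exact: 1/(1+2)^2
```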
Since we would like to maintain the form (2.2) for the PDFs of the sum s, product p and quotient q, we seek their representations as linear combinations of the same type, where the numbers of terms J₁, J₂, and J₃ are (near) optimal in each of these constructions. We solve this approximation problem by first sampling at equally spaced nodes the functions r, v or w of Lemmas 2.1-2.4, forming a Hankel matrix with these samples, and then applying the algorithms described in the next section.
In what follows, we denote by v the function that we seek to approximate by a linear combination of decaying and possibly oscillatory exponentials,

(2.16) v(t) ≈ Σ_{k=1}^{M} c_k e^(−ω_k t),

where the number of terms, M, is as small as possible.
Algorithms for computing exponential representations
Numerical approximation of functions by exponentials can be understood as a finite-dimensional version of AAK theory [2,3,4]; this connection has been addressed in e.g. [6,7,16]. Here we only briefly describe algorithms already developed for this purpose. We show how to compute the exponents ω_k and coefficients c_k in (2.16) from 2N+1 equispaced samples v_n = v(Rn/(2N)), n = 0, …, 2N, where the range R and the step size R/(2N) are chosen so that v(t) is sufficiently over-sampled and |v(t)| < ε for t ≥ R. Thus, we solve the discretized problem

(3.1) |v_n − Σ_{k=1}^{M} c_k γ_k^n| ≤ ε, n = 0, …, 2N,

where we seek nodes γ_k and coefficients c_k so that the number of terms M is minimal. The exponents ω_k in (2.16) are related to the nodes γ_k by ω_k = −(2N/R) log γ_k. Currently there are two algorithms for obtaining the approximation (3.1); both use the Hankel matrix constructed from the samples v_n (see [21,18,19,20] and [6,25]).
We first present the key steps of the so-called HSVD (or matrix pencil) algorithm [21,18,19,20]. In Algorithm 1, X† denotes the pseudo-inverse of the matrix X, X(m:n, :) denotes the sub-matrix consisting of rows m through n, and X = (x₁ …) lists the columns of X.
Algorithm 1 Computing exponential representations I
(1) For a desired accuracy ε, compute M con-eigenvectors and corresponding con-eigenvalues of the Hankel matrix H; a solution to this problem is guaranteed by Takagi's factorization [17] and may be reduced to finding the SVD of H.

An alternative algorithm, described below as Algorithm 2, was introduced in [6] (see also [8,16,25]); it relies on solving a con-eigenvalue problem for the Hankel matrix H (the solution of which is guaranteed by Takagi's factorization [17]) and may be reduced to finding the Singular Value Decomposition (SVD) of H. Unlike Algorithm 1, it requires only a single con-eigenvector of the same Hankel matrix. We have implemented and used both algorithms.
Importantly, we implemented a fast SVD solver for Step 1 of Algorithms 1 and 2 using the randomized approach developed in [11,22,15] and the fact that Hankel matrices can be applied in O(N log N) operations using the Fast Fourier Transform (FFT). We describe it as Algorithm 3 and note that the number M of singular vectors needed to achieve accuracy ε is usually unknown. If M is chosen correctly, then the smallest pivots computed in Step 2 of Algorithm 3 will be less than ε. However, if all pivots are greater than ε, then by doubling M, Steps 1 and 2 of Algorithm 3 can be repeated until the desired size of pivots is achieved.
Algorithm 2 Computing exponential representations II
(1) Given ε, the desired accuracy, compute the con-eigenvector u_M and the corresponding con-eigenvalue σ_M such that σ_M/σ_0 ≤ ε, where σ_0 is the largest con-eigenvalue. A solution is guaranteed by Takagi's factorization [17] and may be reduced to finding the (M+1)-st singular vector of H. We note that we can always check the approximation error a posteriori, using e.g. the already computed values v_n = v(Rn/(2N)), and select a smaller singular value if necessary. The connection between the accuracy ε and the ratio σ_M/σ_0 of the M-th and the largest singular values in Algorithms 1 and 2 is one of the key features of AAK theory [2,3,4] for semi-infinite Hankel matrices.
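The algorithms above are stated in terms of SVD/con-eigenvalue decompositions of Hankel matrices. As a minimal stdlib-only illustration of the underlying idea — recovering nodes γ_k and exponents ω_k from equispaced samples — here is a toy Prony-type recovery of a two-term exponential sum (a hand-rolled 2×2 version, not the paper's Algorithm 1 or 2):

```python
import math

# Samples of v(t) = 3 e^{-t} + e^{-2t} on the equispaced grid t_n = n*h.
h = 0.1
v = [3.0 * math.exp(-1.0 * h * n) + math.exp(-2.0 * h * n) for n in range(4)]

# The samples satisfy the recurrence v_{n+2} = a v_{n+1} + b v_n whose
# characteristic roots are the nodes gamma_k = e^{-omega_k h}.
# Solve the 2x2 Hankel system [[v1, v0], [v2, v1]] [a, b]^T = [v2, v3]^T.
det = v[1] * v[1] - v[0] * v[2]
a = (v[2] * v[1] - v[0] * v[3]) / det
b = (v[1] * v[3] - v[2] * v[2]) / det

# Roots of z^2 - a z - b = 0 give the nodes; exponents follow from gamma = e^{-omega h}.
disc = math.sqrt(a * a + 4.0 * b)
gammas = sorted([(a + disc) / 2.0, (a - disc) / 2.0])
omegas = sorted(-math.log(g) / h for g in gammas)

# Coefficients from the first two samples (2x2 Vandermonde system).
g1, g2 = gammas
c1 = (v[1] - g2 * v[0]) / (g1 - g2)  # coefficient of the node g1 (the e^{-2t} term)
c2 = v[0] - c1                       # coefficient of the node g2 (the e^{-t} term)
```

Run on exact samples this recovers ω = 1, 2 and the coefficients 3 and 1; the SVD-based algorithms of this section solve the same problem robustly for many terms, noisy data, and near-optimal M.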
Remark 3.1. The function v in (2.16) may change rapidly near zero (e.g. it can have a logarithmic singularity at zero), so that it requires sampling with a small step size. Due to the equally spaced sampling of v (see (3.1)), the size of the matrix H in Algorithms 1 and 2 can then be large. Although we have a fast algorithm for computing the SVD of a large matrix (Algorithm 3), we can instead apply Algorithm 1 or 2 several times, using first a coarse sampling (sufficient in an interval away from zero) and then subtracting the result from v so that the essential support of the difference is reduced. Specifically, given a function v(t) with |v(t)| < ε for t ≥ R, we approximate it by ṽ(t) = Σ_i ṽ_i(t), where each ṽ_i(t) is obtained using Algorithm 1 or 2 by sampling the current residual function on a successively smaller interval.

Numerical examples

4.1.1. Product of Gamma random variables. The Gamma distribution has PDF

(4.1) f_γ(x; α, β) = (β^α / Γ(α)) x^(α−1) e^(−βx), x > 0,

where α > 0 and β > 0 are called the shape and rate parameters. Worth noting are two special cases of the Gamma distribution: when α = 1 it is called the exponential distribution, and when β = 1/2 it is known as the chi-squared distribution. We also note that the PDF of the Gamma distribution is already in the form (2.2) that we want to maintain. For Gamma-distributed random variables X ~ f_γ(x; 2, 2) and Y ~ f_γ(x; 3, 2) we compute the PDF p_Z of their product, Z = XY. The product PDF is available analytically as p(t) = 32 t^(3/2) K₁(4√t), where K₁ is a modified Bessel function of the second kind. Using (2.5) we compute the PDF of the product p_Z and compare it with the analytic result p. The PDFs of the random variables X and Y are displayed in the accompanying figures.
4.1.2.
Product of Nakagami random variables. The distributions of products of Nakagami random variables have applications in wireless communication systems [29]. The PDF of a Nakagami-distributed random variable is

f_N(x; m, Ω) = (2 m^m / (Γ(m) Ω^m)) x^(2m−1) e^(−(m/Ω) x²),

where m ≥ 1/2 and Ω > 0 are called the shape and spread parameters; as is well known, the Nakagami distribution is related to the Gamma and Chi distributions. In this example, we compute the PDFs of the product of two, four and eight Nakagami-distributed random variables. Given the random variable X ~ f_N(x; 1, 1) = 2x e^(−x²) (see Figure 4.5), we first employ either Algorithm 1 or 2 on the Gaussian part of f_N, g(x) = 2e^(−x²), to obtain its approximation, g̃(x), in the form (2.2) with α = 1. To obtain g̃ it is sufficient to sample g on the interval x ∈ [0, 6] and use Algorithm 2 to solve (3.1) with R = 6 and N = 500, where we set ε = 10^(−11). After obtaining an accurate approximation f_X of f_N, using g̃ in the form (2.2), we compute the PDF p_Y of the random variable Y = X² using (2.7) and display it in Figure 4.5. The exact product PDF is available analytically as p(x) = 4x K₀(2x), where K₀ is the modified Bessel function of the second kind. The error is displayed in Figure 4.6. Figure 4.7 shows the location of the complex nodes ξ_m in the representation of p_Y. Using the computed PDF p_Y, we compute the PDF p_Z of the product of four Nakagami-distributed random variables, Z = Y² = X⁴ (see Figure 4.8). Likewise, we compute the PDF p_W of the product of eight Nakagami random variables and display the result in Figure 4.9.
4.1.3.
Product of Lomax and Gamma random variables. As another example, we compute the PDF of the product of a Gamma-distributed random variable with PDF given by (4.1) and a Lomax-distributed random variable with PDF

f_L(y; α, λ) = (α/λ) (1 + y/λ)^(−(α+1)),

where α > 0 and λ > 0 are called the shape and scale parameters. In this example we use the Gamma-distributed random variable X ~ f_γ(x; 3, 2) and the Lomax-distributed random variable Y ~ f_L(y; 5, 2), and compute the PDF p_Z of the product Z = XY. We illustrate the PDFs of the random variables X, Y, and Z in the accompanying figures.
4.1.4.
Product of Weibull and Nakagami random variables. Next we consider a Weibull-distributed random variable with PDF

f_w(x) = (k/λ) (x/λ)^(k−1) e^(−(x/λ)^k),

where k > 0 and λ > 0 are called the shape and scale parameters. We use a Weibull-distributed random variable X ~ f_w(x; 1, 1.5) and the random variable Y obtained in Example 4.1.2 (the product of two Nakagami random variables), and compute the PDF p_Z of the product Z = XY. We display the results in Figure 4.11.

4.1.6. Heavy-tailed distribution. Our approach remains valid for heavy-tailed distributions. As an example, let us consider a random variable X with the standard Cauchy distribution, and compute the distribution of |X|². In this case the integral defining the PDF of |X|² can be evaluated explicitly, which allows us to estimate the error of our numerical approach. We start by approximating the Cauchy distribution (4.6) via exponentials in the form (2.2). Using the Laplace transform, we obtain an integral representation (4.8). Unfortunately, discretizing this integral directly via the trapezoidal rule requires too many terms to achieve an accurate approximation for small values of x. Therefore, we discretize (4.8) only to approximate the tail of (4.6), obtaining 15 terms. Finally, we remove all terms of f_head(x) + f_tail(x) with weights less than 0.33 · 10^(−12), leaving 73 terms in the resulting approximation of f_exact(x). To compute an approximation for the distribution of |X|², we use p(t) in Lemma 2.2 with α = β = 1. In order to find an exponential approximation for p(t), we first use 40,000 equally spaced points with step size h = 2 to discretize (4.11) and use Algorithm 1 or 2 to approximate the "tail" of p. The resulting approximation, with 30 terms, is valid in the interval [9.18, ∞).
We then consider the difference p_head^1(t) = p(t) − p_tail(t) on the interval [10^(−4), 9.18], discretize p_head^1(t) on this interval using 100,000 equally spaced points, and use one of the mentioned algorithms to obtain an approximation of p_head^1(t) with 40 terms. We then consider p_head^2(t) = p(t) − p_tail(t) − p_head^1(t) on the interval [10^(−7), 0.5814 · 10^(−3)], discretize it using 50,000 equally spaced points, and again use Algorithm 1 or 2 to obtain an approximation with 25 terms. In Figure 4.14 we display the error of the final approximation with 95 terms,

(4.12) err_p(y) = log₁₀(|p_exact(10^y) − p_tail(10^y) − p_head^1(10^y) − p_head^2(10^y)| + 10^(−20)),

where y ∈ [−7, 8]. We note that in this example we were not seeking an optimal representation of the form (2.2) of the distribution of |X|².
Computing expectations of functions of random variables
An important use of representing the PDF p_Z of a non-negative random variable in the proposed functional form is to compute, for a function u, the expectation of the variable u(Z),

(5.1) E[u(Z)] = ∫_0^∞ u(x) p_Z(x) dx.

If the function u is given analytically, i.e. we can evaluate it at any point, the fact that we have a functional representation of p_Z allows us to use an appropriate quadrature to evaluate this integral to any desired accuracy. Moreover, if the function u admits a representation via exponentials (which can be computed via Algorithm 1 or 2), the expectation (5.1) can be evaluated explicitly. If only samples of the function u are provided, then we can treat p_Z as a weight and construct a quadrature with nodes at locations where the values of u are available. If the function u is a monomial, i.e. when computing the moments of the random variable Z, we can use the explicit integral

∫_0^∞ x^(α−1) e^(−ηx) dx = η^(−α) Γ(α), Re(η) > 0, α > 0.
For example, given the PDF of a random variable Z in the form (2.2) with coefficients a_l and exponents η_l, we compute its first moment m₁ as

m₁ = Γ(α+1) Σ_l a_l η_l^(−(α+1)).

We note that while our algorithms do not guarantee that the moments are preserved exactly, the accuracy of the resulting moments is controlled by the overall accuracy of the approximation. In particular, we can always enforce ∫_0^∞ p_Z(x) dx = 1 by imposing an additional linear constraint on the coefficients a_l of the exponential approximation.
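As a sketch of such moment computations (with our own naming of the coefficient–exponent pairs), the k-th moment of a PDF in the form (2.2) follows directly from the displayed integral; for a Gamma density, itself a one-term instance of (2.2), this reproduces the known mean α/β:

```python
import math

def moment(k, alpha, terms):
    """E[Z^k] for p_Z(x) = x^(alpha-1) sum_l a_l e^{-eta_l x},
    using int_0^inf x^(s-1) e^{-eta x} dx = Gamma(s) eta^{-s}."""
    return sum(a * math.gamma(alpha + k) / eta ** (alpha + k) for a, eta in terms)

# Gamma(alpha, beta) PDF: beta^alpha / Gamma(alpha) * x^(alpha-1) e^{-beta x},
# i.e. a single term (a, eta) = (beta^alpha / Gamma(alpha), beta).
alpha, beta = 2.5, 2.0
terms = [(beta ** alpha / math.gamma(alpha), beta)]
m0 = moment(0, alpha, terms)  # normalization, equals 1
m1 = moment(1, alpha, terms)  # mean, equals alpha/beta
```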
Conclusions and further work
For any user-selected accuracy, we have developed an approximate representation (2.2) of PDFs of non-negative random variables that allows us to compute the PDFs of their sums, products and quotients in the same functional form. The monomial factor in the functional form (2.2) is chosen to accommodate a possible rapid change of the PDFs of non-negative random variables near zero.
We demonstrated the accuracy and efficiency of the resulting numerical calculus of PDFs on several numerical examples. In order to account for the boundary at zero, we use a different representation of the PDFs of non-negative random variables than our previous construction for random variables defined on the whole real line [9]. Clearly, the Gaussian mixtures used in [9] do not have support restricted to the positive real axis and, thus, could not yield an efficient representation.
While there are clear advantages to our new approach for computing PDFs in comparison with Monte Carlo-type methods, we do not compare the two in this paper. We plan to address such a comparison elsewhere in the context of practical applications.
"year": 2017,
"sha1": "1fd01c553f52d6d64bcacb3f9f677c8330f53e06",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1707.07762",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4402802b897bd024a98d7b5fcca5c65a65637068",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Determinants Influencing Entrepreneurial Intention in Hanoi, Vietnam
This research employed survey data from 204 students across two groups, economics and technical majors, in Hanoi city to assess the impact of various determinants on entrepreneurial intention. The results show that the determinants Need for achievement, Self-efficacy, and Instrumental readiness have positive impacts on students' entrepreneurial intention. This study also creates a basis for comparing students across economics and technical majors, work experience, and gender. These findings are the basis for recommending policies and solutions to promote the entrepreneurship movement in Vietnam.
Introduction
Vietnam in general, and the capital Hanoi in particular, have witnessed the emergence of a remarkable new "flow" of entrepreneurship. Despite the difficulties, the start-up market in Vietnam is still among the top three thriving start-up markets in Southeast Asia, along with Thailand and Indonesia. This shows that the Vietnamese start-up movement has extraordinary vitality and development potential. Clearly, Vietnam is witnessing the rise of a strong generation of young businesspeople who are confident, decisive, and dare to create jobs for themselves and for many others. However, the impression that these companies leave is not remarkable. According to VCCI (2015), young businesses are likely to collapse in their first years of operation: it is estimated that in the last two years, 80% of startups faced the risk of dissolution in their first working year. The main reasons are a lack of capital (40%), a lack of knowledge about management skills for small and medium enterprises (50%), and a lack of practical experience in the business environment (30%). In other words, a high proportion of young students and entrepreneurs are simply looking for luck when starting a business.
Along with the growing importance of entrepreneurship in business practice, an increase in interest in entrepreneurship is also found in academia, and much research in this field has investigated startups and related aspects such as entrepreneurial intention and behavior (Bird, 1988; Kolvereid, 1996; Tkachev and Kolvereid, 1999; Mazzarol and Soutar, 1999; Misra and Kumar, 2000); these studies proposed different behavioral directions and models of entrepreneurial intention. For example, Mazzarol and Soutar (1999), based on previous studies, proposed two antecedent factors for starting a business: the environment and personality.
The main objective of this study is to examine the personal and environmental factors that influence entrepreneurial intention. In addition, the study considers whether there are differences between male and female students, between technical and economics students, and between students who have and have not worked, in assessing the impact of these factors on students' intention to start a business in Hanoi.
political and economic conditions, and infrastructure and institutions (Kristiansen, 2001, 2002b). Entrepreneurial intentions and behaviors are affected not only by individual characteristics and context but also, significantly, by the business environment. Anderson (2000) studied some startups on the outskirts of the Scottish Highlands and found that environmental goals that once seemed unrealistic have now become a real concern for businesses. These environmental factors, sometimes used to denote all external influences on a business (Gartner, 1985), are captured here as instrumental readiness. Three contextual factors are often considered significant by potential entrepreneurs: capital access, information access and social networks.
-Capital access: Capital access is clearly one of the main obstacles for new startups, especially in a developing economy with a shortage of venture capital funds. Funds may come from personal savings, family support, credit systems, community savings funds, or financial institutions and banks.
-Information access: Anand Singh and Krishna (1994), in their investigations of entrepreneurship in India, showed that acumen in information search is one of the characteristics of entrepreneurs; it reflects the frequency of an individual's contact with various sources of information. The outcome of this activity often depends on the firm's ability to access information through personal capacity and social networks. In a study of agricultural businesses in Java, Kristiansen (2002a) found that access to new information is crucial to the existence and progress of businesses. The availability of new information is determined by personal characteristics, such as education level, and by the quality of infrastructure, communication coverage and telecommunication systems.
-Social networks: Social networks serve as a vehicle for entrepreneurs to reduce risks and transaction costs while improving access to ideas, knowledge and business capital (Zimmer, 1986). Entrepreneurship research has increasingly reflected the general view that entrepreneurs and new companies must engage in networking to survive (Huggins, 2000). Social networks comprise a series of formal and informal relationships between the central actor and other actors in a circle of acquaintances, and they represent channels through which entrepreneurs gain access to the resources necessary to start a business, grow and succeed (Kristiansen and Ryen, 2002). Finally, with regard to the contextual factors that significantly affect entrepreneurial intention, we suggest that an individual's perception of the ability to access capital and information, and of the quality of social networks, is a measurable factor that impacts entrepreneurial intention. As a result, the following hypothesis is stated: H4: Instrumental readiness has a positive impact on students' entrepreneurial intentions.
Demography and individual background
Numerous previous studies have argued that personal factors such as gender, major and work experience influence students' entrepreneurial intentions. According to research by Haus et al. (2013), men's intention to start a business is on average higher than women's. Nonetheless, the gender gap in entrepreneurial intention and motivational structures is small and cannot completely explain the significant differences in business formation.
On the other hand, the study of Trang (2018) concentrated on evaluating the factors affecting the entrepreneurial intention of engineering students in Vietnam, comparing the impact of factors across different groups of technology students. Similarly, the investigation of Hiệp et al. (2019) mainly analyzed the factors affecting the entrepreneurial intention of economics students in Ho Chi Minh City. Internationally, the major is also included among the factors affecting students' entrepreneurial intention; the difference in entrepreneurial intention between economics and technical students is estimated in the study of Maresch et al. (2016).
Research by Indarti and Kristiansen (2003) shows that work experience is also a factor affecting Norwegian students' entrepreneurial intentions. Although work experience is an essential factor, the results show no significant difference in Norway between business administration students with and without work experience. Based on the above analysis, we propose the following hypothesis:
H5: The factors influencing students' entrepreneurial intentions differ by gender (male vs. female), major (technical vs. economic) and work experience (worked vs. not yet worked).
Based on the analysis above and the theoretical and practical contributions concerning the factors that affect students' entrepreneurial intentions, we propose a model of the factors influencing students' entrepreneurial intentions (Figure 1).
Research methodology
We combined qualitative and quantitative research in the implementation steps to evaluate the impact of the factors affecting students' entrepreneurial intentions according to the defined research model.
The qualitative research was used to help the research team explore the factors that influence students' entrepreneurial intentions and to adjust each factor to suit the startup situation in Vietnam.
The quantitative research used a questionnaire based on the established research model, administered through a survey of students' opinions. The collected data formed the basis for assessing the quality of the scales and for testing the model and research hypotheses. Cronbach's Alpha and Exploratory Factor Analysis (EFA), supported by SPSS version 22.0, were used to evaluate the quality of the scales, while Confirmatory Factor Analysis (CFA) and Structural Equation Modeling (SEM) were used to test the model and hypotheses. The research model, with entrepreneurial intention as the dependent variable and the influencing factors as independent variables, indicates the impact of these factors on the entrepreneurial intention of the surveyed Vietnamese students.
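As a concrete illustration of the reliability step, Cronbach's Alpha can be computed directly from a respondents-by-items score matrix. The sketch below uses made-up Likert-style responses, not the study's data, and mirrors what SPSS reports:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (5 respondents x 3 items)
scores = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
])
alpha = cronbach_alpha(scores)
```

Values above the conventional 0.6 threshold mentioned in the results section would indicate acceptable internal consistency.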
Scale
The scale comprises observed variables (questions) grouped by the factors expected to influence the entrepreneurial intentions of Vietnamese students, as shown in the model in Figure 1. In particular, Entrepreneurial Intention is measured with the concept and nine-item scale of Leong (2008). As in many previous studies, Need for Achievement is measured with five observed variables from Leong (2008). Similarly, Locus of Control is measured with three observed variables from Indarti and Kristiansen (2003). Self-Efficacy is measured with four observed variables developed by Cassar and Friedman (2009). Finally, Instrumental Readiness is measured with six observed variables from Leong (2008).
Sampling
The study consists of two steps: preliminary research and the official study. The preliminary research included both a qualitative and a quantitative component; the second step was the official quantitative study. In the qualitative preliminary research, the authors conducted in-depth expert interviews and focus-group interviews to complete draft scale 1 and create draft scale 2. Next, the preliminary quantitative research was conducted with 100 respondents, of which 80 responses were usable (N = 80), in order to verify the reliability of the scales with Cronbach's Alpha and EFA and to eliminate observed variables that did not meet the required thresholds.
After that, the completed questionnaire was used in the official quantitative research at several universities in Hanoi from January 2020 to February 2020. The study population comprises all students of the Business Administration/Economics and Technical faculties studying at these schools. The questionnaire consists of 27 observed variables used in the factor analysis, following the principle of at least 5 respondents per observed variable (Bentler and Chou, 1987); the minimum sample size is therefore 27 * 5 = 135. However, the authors aimed to collect 300 responses (N = 300) to increase the reliability of the study and received 244 questionnaires. After screening and removing invalid responses, 204 valid questionnaires were used for the official analysis.
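The sample-size arithmetic above follows the common rule of thumb of at least five respondents per observed variable; a tiny sketch (the function name is ours, for illustration):

```python
def min_sample_size(n_items: int, per_item: int = 5) -> int:
    """Minimum sample under the 'at least 5 respondents per observed variable' rule."""
    return n_items * per_item

# 27 observed variables -> 27 * 5 = 135 respondents minimum
needed = min_sample_size(27)
```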
Exploratory Factor Analysis
Before running Confirmatory Factor Analysis (CFA) to check the reliability and validity of the scales, the team first ran factor analysis with EFA and Cronbach's Alpha.
The results of EFA and Cronbach's Alpha in Table 2 show that the Composite Reliability (CR) of every scale is greater than 0.6 and that the cumulative variance explained exceeds 50%, so both statistics meet the requirements. These results were generated by running Reliability Analysis to obtain Cronbach's Alpha and Factor Analysis to obtain the Average Variance Extracted.
Moreover, the EFA results show that, across the five factors in three correlated groups, every observed variable loads on a single factor with a loading greater than 0.5; thus, all scales have convergent validity. In addition, each observed variable loads on only one factor, so the factors are also distinct from one another (discriminant validity). Lastly, all corrected item-total correlations are greater than 0.3, so the quality of the scales meets the requirement. In sum, after this first stage of analysis, the variables in the research model are valid and usable for the next stage.
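The corrected item-total check used above can be reproduced with a few lines of NumPy; the score matrix below is made-up illustrative data, not the survey responses:

```python
import numpy as np

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    total = items.sum(axis=1)
    out = np.empty(items.shape[1])
    for j in range(items.shape[1]):
        rest = total - items[:, j]                 # scale total excluding item j
        out[j] = np.corrcoef(items[:, j], rest)[0, 1]
    return out

# Hypothetical 5-point Likert responses (5 respondents x 3 items)
scores = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
])
r = corrected_item_total(scores)
retained = r > 0.3        # the 0.3 retention threshold used in the text
```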
Testing models
After completing the assessment of the scales, we tested the theoretical model. The output of the model testing is presented in Figure 2: Chi-square/df = 681.861; GFI = 0.909; TLI = 0.903; CFI = 0.907; RMSEA = 0.080, indicating that the model fits the observed data well. In addition, the calculated results show that all relationships are statistically significant (p < 0.05), except for the relationship between "Locus of Control" and "Entrepreneurial Intention", which is not statistically significant (p > 0.05).
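For reference, RMSEA can be recomputed from the chi-square statistic, its degrees of freedom and the sample size. The inputs below are illustrative placeholders (the paper reports only the fit indices themselves, not the degrees of freedom):

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root Mean Square Error of Approximation; 0.08 or less is a common cutoff."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical values: chi-square 600 on 300 df with the study's N = 204
fit = rmsea(600.0, 300, 204)
```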
Hypothesis Testing
Hypothesis 1: At the p < 0.05 threshold, the estimated weight is statistically significant; the output gives p < 0.001 with a positive coefficient (H1 = 0.206). We therefore conclude that Need for Achievement positively affects Entrepreneurial Intention. Hypothesis 2: The estimated weight is not statistically significant (p > 0.05), so there is no basis to assert that Locus of Control affects Entrepreneurial Intention.
Hypothesis 3: At the p < 0.05 threshold, the estimated weight is statistically significant; the output gives p < 0.001 with a positive coefficient (H3 = 0.913). We therefore conclude that Self-Efficacy positively affects Entrepreneurial Intention.
Hypothesis 4: At the p < 0.05 threshold, the estimated weight is statistically significant; the output gives p < 0.001 with a positive coefficient (H4 = 0.108). We therefore conclude that Instrumental Readiness positively affects Entrepreneurial Intention.
Hypothesis 5: In this study, the authors analyzed the Structural Equation Model (SEM) by attribute: the gender, major and work experience of students intending to start a business. The results confirmed that the factors influencing students' entrepreneurial intentions differ between gender groups (male and female), majors (engineering and economics) and work-experience groups (worked and not yet worked).
Conclusions and recommendations
This study has demonstrated that Need for Achievement, Self-Efficacy and Instrumental Readiness affect students' entrepreneurial intention. Although these factors are complex, both the theoretical analysis and the empirical results show that they have a positive effect on the entrepreneurial intentions of students in Hanoi. In particular, Self-Efficacy is the strongest factor (H3 = 0.913), while Instrumental Readiness has the weakest impact (H4 = 0.108).
In addition, the study also clarifies that there are differences between male and female students, between economics and engineering students, and between students with and without work experience in how these factors affect the intention to start a business among students in Hanoi.
The results of the study have suggested a number of implications to motivate students' entrepreneurial intentions.
First, universities, and even the Vietnamese government, should research, select and apply training programs that emphasize entrepreneurship, which can be a good way to raise students' intention to start up. The training program should also focus on the knowledge and skills that strengthen students' entrepreneurial intentions, or integrate more specialized entrepreneurship courses into the current curriculum, so that students acquire the necessary skills alongside their specialized ones. In addition, universities should organize activities such as Entrepreneur Days and meetings between students and young businesses so that students are motivated to "dare to think and dare to do".
Second, universities should promote entrepreneurship through their programs; this is also an essential goal. Universities should organize entrepreneurship competitions, start-up idea contests, and networking meetings among young people who share the same ambition. Communication also plays a very important role in promoting the intention to start a business. In addition, each university or college should set up entrepreneurship centers to create an environment in which Vietnamese students can approach and implement start-up projects.
Third, universities, governments and organizations focused on these activities also need to consider carefully the strategies, mechanisms and programs that enable each entrepreneur to access funding, information and social networks. These are important factors influencing students' entrepreneurial intentions. Capital may come from personal savings, family support, credit systems, community savings funds, or financial institutions and banks. In addition, an information infrastructure is needed to help students find accurate, up-to-date information about entrepreneurial activities, especially about the entrepreneurial spirit, because some students still do not fully understand entrepreneurship or underestimate it. Moreover, it is also necessary to build social networks of relationships and treat them as an important means for entrepreneurs to reduce risk and transaction costs while improving access to ideas, knowledge and business capital.
Fourth, from the students' side, any decision to start a business should be made with caution: whether one truly dares to face the responsibilities, risks and challenges of a leader, or simply prefers to be an ordinary employee in a well-known company. The results of this study reinforce the argument that the courage and intention to start a business strongly shape a student's perceptions and the project's chance of success.
Fifth, students should actively participate in activities such as start-up workshops, education and training. These can promote students' entrepreneurial spirit and prepare them not only to become good employees but also qualified entrepreneurs.
In short, based on the survey results, this research provides some new and more specific findings on the determinants of students' entrepreneurial intention. These findings can help universities, governments and organizations find solutions to promote the entrepreneurial intention of Vietnamese students and develop the entrepreneurial movement in Vietnam.
Proper Time Formalism, Gauge Invariance and the Effects of a Finite World Sheet Cutoff in String Theory
We discuss the issue of going off-shell in the proper time formalism. This is done by keeping a finite world sheet cutoff. We construct one example of an off-shell covariant Klein Gordon type interaction. For a suitable choice of the gauge transformation of the scalar field, gauge invariance is maintained off mass shell. However at second order in the gauge field interaction, one finds that (U(1)) gauge invariance is violated due to the finite cutoff. Interestingly, we find, to lowest order, that by adding a massive mode with appropriate gauge transformation laws to the sigma model background, one can restore gauge invariance. The gauge transformation law is found to be consistent, to the order calculated, with what one expects from the interacting equation of motion of the massive field. We also extend some previous discussion on applying the proper time formalism for propagating gauge particles, to the interacting (i.e. Yang Mills) case.
Introduction
The sigma-model or renormalization group approach to string theory [1][2][3][4][5][6][7] has shown some promise as an alternative to string field theory for doing nontrivial calculations. It is hoped that it will be computationally simpler and that the physical significance of the symmetries will be more transparent than in string field theory [8][9][10][11][12]. In the renormalization group approach to string theory there are two outstanding issues. One is that of gauge invariance and the other is that of an off-shell formulation. These two issues are, of course, intertwined because, in general, maintaining gauge invariance off-shell is more difficult than on shell. The issue of 'massless' U(1) gauge invariance in the on-shell sigma model formalism was discussed some time ago [13,[28][29][30]]. Gauge invariance associated with the massive modes is a little more complicated, requires the introduction of an infinite number of 'proper time' variables, and was discussed in ref [14]. The discussion of gauge invariance in these papers has been restricted to linear gauge transformations determined by the invariance of the free theory. At the interacting level the situation is a lot more complicated, especially off-shell. A discussion of these issues in the BRST formalism is contained in Ref [27].
In ref [15] a study of the gauge invariance of the interacting theory was initiated. We derived, in the proper time formalism [16,[21][22][23][24][25][26] (which is really a variant of the renormalization group approach), the covariant Klein Gordon equation. It was shown that the technique works equally well for point particles as well as strings. For point particles it is exact, while for strings it is derived as a low energy approximation. The usual gauge invariance at the massless level, δA µ = ∂ µ Λ, arises as a freedom to add total derivatives to the two dimensional world sheet action. It was shown that this is no longer a symmetry when interactions are present because of boundary terms. These boundary terms can be cancelled by an appropriate transformation of the Klein Gordon scalar field δφ = iΛφ. This enables us to understand how the 'interacting' terms in gauge transformations arise in this formalism. (By 'interacting terms' we mean those that are required specifically by the interacting theory). We had also discussed a possible generalization of the proper time formalism when dealing with the propagation of gauge particles. As a first application, we gave (yet another!) derivation of the Maxwell equations.
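In standard point-particle notation (metric-signature conventions aside), the covariant Klein Gordon equation and the transformation laws just described read:

```latex
% Gauge-covariant Klein-Gordon equation with a U(1) background A_\mu
(\partial_\mu - iA_\mu)(\partial^\mu - iA^\mu)\,\phi = m^2 \phi ,
\qquad
\delta A_\mu = \partial_\mu \Lambda , \qquad \delta \phi = i\Lambda \phi .
% With these laws, D_\mu\phi \equiv (\partial_\mu - iA_\mu)\phi transforms
% covariantly: \delta(D_\mu \phi) = i\Lambda\, D_\mu \phi ,
% so the equation of motion transforms homogeneously.
```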
We would like to extend the results of that work in two directions. One is to study the effect of keeping a finite cutoff on the world sheet. The motivation for this goes back to Ref [17], where it was argued that in order to go off-shell, in this formalism, one needs a finite cutoff. In the language of the renormalization group this is equivalent to the statement that when the cutoff is finite (w.r.t. the correlation length) one is away from the fixed point (i.e. off-shell) and this is also where the irrelevant operators (i.e. vertex operators for off-shell massive modes) are no longer irrelevant. Thus, in this paper we construct an off-shell vertex coupling the photon to two scalars. There is an obvious consistency check: the equation of motion one derives should be an off-shell version of a gauge covariant Klein Gordon equation, and should reduce to the usual one on shell. 1 We should point out that the vertex constructed is not unique; there are many other possibilities, and other constraints have to be imposed to single out one choice.
We then proceed to the next order, which is the cubic term in the covariant Klein Gordon equation, involving two vector fields. Interestingly enough, we find that one can restore gauge invariance (at least to lowest order) by adding a massive "spin-2" particle with an appropriate gauge transformation law. Thus, gauge invariance can be maintained with finite cutoff if we have nontrivial backgrounds for the massive modes. We can also check whether the gauge transformation law for this massive mode is consistent with its (interacting) equation of motion. We find that this is so, at least, to lowest order in momentum. Assuming these results continue to hold to all orders in the cutoff, which will require adding an infinite tower of massive modes, we can say that, in a sense, keeping a nonzero world sheet cutoff is equivalent to keeping all the massive modes. We do not find this statement surprising. Given the renormalization group interpretation, it is to be expected. Nevertheless, we find it very interesting that it is being derived in a completely different way, with no reference to the renormalization group whatsoever. In ref [14], however, we speculated that this is true for a nonzero space time cutoff (rather than just the world sheet cutoff). But we have no calculations to support this speculation, yet.
We also extend the results of ref [15] in another direction. We study the Yang-Mills vector field and use the generalization of the proper-time method to derive its equation of motion (to lowest order in momentum). This is only a vindication of the method described there for gauge particles, since the result is standard. What would be nontrivial is to extend this to a finite-cutoff off-shell vertex. This paper is organized as follows: In Section II we present a brief review of the derivation of Maxwell's equation and then derive the Yang Mills case. In Section III we discuss the off-shell three point vertex of the covariantised Klein Gordon equation. In Section IV we look at the four point vertex and study the gauge covariance properties. We conclude in Section V. Details of a calculation are given in an Appendix.
Equation of Motion for Gauge Fields
The renormalization group equation can be rephrased as a 'proper time' equation stating that the vertex operator O(z) has dimension one. Since z = e^{τ+iσ}, where for open string vertex operators we have to set σ = 0, this equation takes the form of a proper time equation familiar for point particles. In Ref [15] we used this to derive the covariant Klein Gordon equation. The expectation value was evaluated with the weight e^{(1/2)∫d²z ∂_z X ∂_z̄ X + ∫dz A_µ ∂_z X^µ}, where A_µ is a background gauge field and O(z) = φ[X(z)] is the scalar field. We refer the reader to Ref [15] for the details. If instead of φ[X(z)] we want to consider a gauge field, then O(z) = A_µ[X(z)] ∂_z X^µ. However, this naive substitution is not satisfactory, since we need to ensure that the result is invariant under A_µ → A_µ + ∂_µΛ. The solution proposed in ref [15] is to replace O(z) by ∫dz O(z) = ∫dz A_µ ∂_z X^µ. This is invariant, because under a gauge transformation A·∂X changes by a total derivative, which vanishes when integrated, assuming that there are no boundaries. However, in this case the operator d/d ln(z−w) is replaced by the functional derivative δ/δΣ(z−w), where Σ(z−w) = <X(z)X(w)>. Thus Σ(z−w) is treated as a field, rather than as a function. 2 The coefficient of A_ν then gives the equation of motion. To obtain the linear term in the equation of motion one does not need any insertions of ∫dz A_µ ∂_z X^µ from the sigma model action, and the result is Maxwell's equation (2.6). Note, also, that the expression in square brackets in (2.5) is in fact the Maxwell action (from which (2.6) is obtained by varying w.r.t. A_µ), except for a factor of 1/2. This factor of 1/2 is important in the Yang Mills case, where we are concerned with the relative normalization vis-a-vis the cubic and quartic terms in the action. Thus, the proper time formalism gives the equations of motion of Yang Mills theory (as we shall see), and not the action.
To reconstruct the action, one will have to put in by hand these factors of 1/2, 1/3 or 1/4 (in the quadratic, cubic and quartic terms respectively).
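For reference, the free Maxwell equation of motion and the action it follows from, in the standard normalization alluded to above, are:

```latex
% Free Maxwell equation of motion and action (standard normalization)
\partial^{\mu} F_{\mu\nu} = 0 ,
\qquad
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu ,
\qquad
S_{\text{Maxwell}} = -\frac{1}{4}\int d^{D}x \, F_{\mu\nu}F^{\mu\nu} .
```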
The crucial point to note is that we are able to integrate by parts on the variable 'w' since we have an integral ∫dw. This concludes our review.
We now turn to the Yang Mills case where we would like to get the cubic and quartic pieces in a manifestly Lorentz covariant fashion. This calculation can be simplified by the following identity (proved in the Appendix) (2.7). We have written the LHS as a sum of two terms, one of which, involving F_σµ = ∂_σ A_µ − ∂_µ A_σ, is manifestly gauge invariant (under A_µ → A_µ + ∂_µ Λ), and the other is a total derivative that vanishes if there are no boundaries. Furthermore, the first term on the RHS can be rewritten, if we remember to extract the piece that is linear in k_1. We can map the upper half plane to a circular disc so that the vertex operators are attached to the circular boundary.
To calculate the cubic term, we have to bring down one factor of A∂X from the exponent. Thus we have to consider (2.9). In correlations we will assume that there is a trace over the matrix indices. Using the notation of ref [14] we will write A_µ(X) = ∫dk_0 dk_1 k_1^µ Φ(k_0, k_1) e^{ik_0·X}, and for simplicity we will omit the integrals and the field Φ when we write correlations. Thus (2.9) becomes (2.10). In performing the above calculation we must keep in mind that there is a path ordering, implicitly, for the matrices. Thus the ordering of the matrices inside the trace follows the ordering of the three points u, w and z along the circle. Since the vector bosons are bosons and the resultant interaction is symmetric under permutations, we can restrict ourselves to a particular ordering while evaluating expressions like (2.10), and multiply the result by a combinatoric factor equal to the number of permutations. In the cubic case there is only one insertion, so there are no such factors.
We now substitute the RHS of (2.7) for each of the three factors inside the correlation in (2.10). Amongst the many terms that arise is the one in (2.12). There are also two other terms of this type obtained by interchanging k ↔ q and p ↔ q. (2.12) now becomes (2.13) (the α, β and γ integrals are understood). We are interested in the dependence on Σ(z − u) in the above expression. 3 To obtain the cubic coupling of Yang-Mills theory we only need to keep the lowest order terms in (2.13). In particular, the second term inside the square brackets does not contribute anything to this order. We can also ignore, to this order, the exponential factors. Thus we get an expression with Σ(z − u) ≡ Σ. Restoring all other factors from (2.9) and (2.10) we get (2.15). Note that we have integrated by parts on z. In evaluating the matrix trace, we have used antisymmetry in 'a' and 'b'. The coefficient of A_µ(k_0) gives the quadratic piece of the Yang-Mills equation of motion. The full symmetry between k, p, and q is manifest when we include the two other terms from eq. (2.10) mentioned earlier. As in the free Maxwell case discussed earlier, we see the full cubic term of the Yang-Mills action in (2.15) (we also have to add the two other terms necessary for symmetry). Again, as before, we have to include a factor of 1/3 to get the right normalization.
The quartic term can similarly be obtained by calculating (2.16). The 2! in the denominator comes from expanding the exponential and the 2! in the numerator comes from having chosen a particular ordering, w > v, in (2.16). In the notation of ref [14] this becomes (2.17). Since we are only interested in the quartic Yang-Mills piece, we can set all the momenta to zero. In that case, on doing the 'v' integral, we get (2.18). Consider the term involving X_ρ(w). We can rewrite the region of integration (and still preserve the ordering) as ∫dw ∫du ∫_u^w dz. Performing the z integral and evaluating the correlation gives (2.20). Since we can integrate by parts on u or w, we can see that (2.20) is antisymmetric in the σ, µ and ν, ρ indices respectively. Antisymmetry in the σ, µ indices implies an antisymmetry in the a, d indices. Thus, restoring the group theory factors, we get the full result. The second term in (2.18) gives a permutation of this. Thus we get, re-expressing in terms of A, eq. (2.22). As before, the coefficient of A(p_0) gives the contribution to the equation of motion, and we can recognize the Yang-Mills structure (after appropriate symmetrization). Also as before, we can recognize in the square brackets the quartic term of the Yang-Mills action. As mentioned earlier, in going from the equation of motion to the action, we have to divide by 4 in order to get the right normalization. The action that gives the above equation of motion is the standard Yang-Mills action. The rest of the terms in (2.10) and (2.17) represent higher order string corrections to the above action. Thus we have demonstrated a method for dealing with interacting gauge particles in the framework of the proper time formalism. The crucial point is to treat the propagator <X(z)X(w)> as a field (Σ), and keep the integrals over the coordinates z and w. This allows one to integrate by parts when performing functional differentiation w.r.t. Σ. In this paper we have stayed close to the mass-shell.
To go off mass-shell in a manifestly gauge covariant way requires a further extension of these methods. In the next section we will address this problem for the simpler case of the Klein-Gordon equation.
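To fix conventions, the standard Yang-Mills quantities recovered in this section (field strength, action, and equation of motion) are:

```latex
% Non-abelian field strength, action, and equation of motion
F_{\mu\nu}^{a} = \partial_\mu A_\nu^{a} - \partial_\nu A_\mu^{a} + f^{abc} A_\mu^{b} A_\nu^{c} ,
\qquad
S_{\mathrm{YM}} = -\frac{1}{4} \int d^{D}x \; F_{\mu\nu}^{a} F^{a\,\mu\nu} ,
\qquad
D^{\mu} F_{\mu\nu} = \partial^{\mu} F_{\mu\nu} + [A^{\mu}, F_{\mu\nu}] = 0 .
```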
Going Off-Shell with a Finite Cutoff
If one evaluates with the interaction A µ ∂ z X µ dz in the sigma model action, one gets an equation of motion for φ that has arbitrarily high powers of A µ . There are many ways of understanding this. In terms of the renormalization group, A µ ∂ z X µ dz is a marginal operator (at least for small momenta). In the continuum limit, near a fixed point, the β-function can have terms with arbitrarily large powers of the (marginal) coupling constant and these terms have no suppression factors on dimensional grounds. In terms of Feynman diagrams in string theory, one is looking at tree diagrams with external massless fields, of which, there can be any number. In terms of the equations of motion, the massive fields can be solved for in terms of massless fields, and can be eliminated from the equations.
If one wants to be sensitive to the coefficients of the irrelevant operators and not just the marginal ones, then one should look at distance scales on the order of the underlying lattice cutoff. In this situation the β-functions are finite-degree polynomials and involve all the coupling constants. This is the case when one is far from the fixed point. In string theory terms, one is off shell and the massive modes are important and the equations of motion involve all the fields, not just the massless ones. The equations are expected to be polynomial in such a situation. This is the situation described by the cubic string field theory vertex [11].
Thus at the moment we have two extreme situations: the quadratic equations of string field theory and the infinitely non-polynomial equations of the sigma model approach. It seems plausible that one should be able to interpolate between these two extremes. The renormalization group interpretation suggests that by varying the value of the underlying cutoff one can modify the degree to which heavy fields are "integrated out". In ref [16] we showed how this could be done in the proper time formalism: When evaluating eqn. (3.1) we introduce a lattice spacing 'a' and require that it be the distance of closest approach between two vertex operators. The parameter z/a determines how many vertex operators can be inserted between φ[X(z)] and φ[X(0)] and thus the degree of polynomiality of the equation. In particular, if z = 2a one can insert only one vertex operator and the equations are purely quadratic. If z/a → ∞ we get a polynomial of arbitrarily high order, i.e. the sigma model situation. Thus for different values of z/a one gets different sets of equations. One expects that if any of these sets of equations is expressed completely in terms of massless fields, by eliminating the massive ones, we will end up with the equation obtained in the z/a → ∞ case, i.e. the sigma model case. However we do not yet have a proof of this. We also do not have field theory Feynman rules for calculating higher n-point functions from a given set of lower n-point vertices and propagators. Without such a prescription we cannot really check the consistency of our formulation. This is an important issue that we hope to address in the future. Meanwhile, however, there is one basic consistency check that can be done, and that is to check gauge invariance. We can require that our equation of motion be gauge covariant and that it reduce to the covariant Klein Gordon equation on-shell.
Before we do this, let us note that in order to make contact with critical strings one should be careful about group theory factors. However, in this work we will just discuss a charged string in some background U(1) field. The problem of a charged string moving in an electromagnetic background has been discussed in ref [28]. The point of deviation in our discussion is that we will be keeping a finite cutoff in order to go off-shell. Thus the electromagnetic field can have any momentum dependence.
Thus, following ref [16], we consider the proper time equation (3.2). This is the same as eqn (3.1) (with one insertion of A), where φ[X(z)] has been chosen to be e^{ik'·X(z)}. The factor z² in eqn (3.2) is needed to produce the second term in eqn (3.1). Before we proceed to evaluate (3.2), let us evaluate a simpler quantity, namely the gauge transformation of (3.2) under A_µ → A_µ + ∂_µΛ. If we replace A_µ by A_µ + ∂_µΛ in the covariant Klein-Gordon equation, we get 2∂^µΛ∂_µφ + ∂²Λφ to lowest order. We would first like to make sure that we reproduce this. Going over to the momentum representation for A and Λ, we get (3.3).
This gives, on-shell (setting k·k' + 2 = q·k = q·k' = 0 in the exponents), the leading order 4 contribution, where we have used k + k' + q = 0. What it should give on-shell is, of course, (q² + 2q·k)Λ(q)φ(k) = 0, which is just the change, under δφ = iΛφ, of the Klein Gordon equation. We can get rid of the factor z/(z−a) by replacing the factor z² in eq. (3.2) by z(z − a). We will do this from now on. In the limit a → 0, this does not make any difference, but for finite a the proper time equation is modified. Thus, we evaluate (3.2) with A_µ[X(u)] = ∫dq A_µ(q) e^{iq·X(u)} and z² replaced by z(z − a), and find the result. The last two terms vanish when the photon is on-shell (i.e. ∂^µF_µν = 0). We also note that the appearance of a pole at k'·q = 0 is deceptive, since the pole terms cancel out. If we choose x = 2, the integrals vanish. In this case we can also let a → 0 or x → ∞. Thus, the main result of this section is the contribution to the Klein Gordon equation given by (3.7). The integrals can be expressed in terms of hypergeometric functions, but we will not do so here. On the face of it, expression (3.7) looks like yet another 3-pt. vertex that should be obtainable from some string field theory [11,12]. However, when we study (3.2) and (3.7) we see a difference. In (3.2) there is an integral over u, the location of A_µ. The vertices considered in the literature thus far always have the three vertex operators in specific locations. In the special case of z = 2a these integrals (in eqn. (3.7)) vanish, and we have well defined locations for the vertex operators.
We also have a rule that $a$ is the minimum spacing between two vertex operators. Thus, for instance, when we consider the gauge-transformed kinetic term in the equation of motion, we get eqn. (3.10). The important point is the modified transformation law, eqn. (3.11), and not the naive variation $\delta\phi = -i\Lambda\phi$. In fact, we have seen that when we substitute $A_\mu \to A_\mu + \partial_\mu\Lambda$ we get eqn. (3.3) (with $z^2$ replaced by $z(z-a)$), which is identical with (3.10). Thus it is the transformation law (3.11) that is consistent with the gauge invariance of (3.7). However, until we have a full formulation that spells out the precise relation between the $(N{+}1)$-point function and the $N$-point function, we cannot claim that this rule of keeping a minimum spacing $a$ between vertex operators is consistent. If we choose $z = 2a$, then the equation of motion is quadratic in the fields. In this case the variation $\delta\phi = -i\Lambda\phi$ of the $A\phi$ piece has to be cancelled by the variation of a massive mode rather than by the variation of the $A$-$A$-$\phi$ piece as in point-particle field theory. If $z/a > 3$ one can have two or more powers of $A$. In this case one expects a cubic $A$-$A$-$\phi$ term in the equation of motion. The gauge transformation property of this term is the topic of the next section.
There are also other modifications that are possible. One can imagine doing the same calculation on a disc where cyclic symmetry is manifest. This would be similar to what is done in ref [18,19,20]. In this case the equation of motion would be different.
Our main aim in this paper is to explore the issues that arise in keeping a finite cutoff -particularly issues of gauge invariance. We should keep in mind that geometries other than that used in eq (3.7) are possible. The results of ref [11,12], in fact, suggest that manifest cyclic symmetry is very important for the full gauge invariance of the theory.
Gauge Invariance and Finite Cutoff
We have been using gauge invariance as a consistency check on the off-shell terms in the Klein-Gordon equation. At the next order in $A_\mu$ something interesting happens: effects of a finite cutoff show up even at the classical level (i.e. before doing the functional integral over $X(z)$). To see this, consider the second-order term (4.1) in the proper time equation. When we perform a gauge variation we get (4.2). Note that we have consistently imposed the rule that $a$ is the distance of closest approach between two vertex operators by an appropriate choice of the limits of integration in (4.2). Notice that the first two terms in expression (4.2) can be cancelled in the usual way by a variation $\delta\phi = -i\Lambda\phi$ of the first-order term considered in the previous section. The other two terms, however, remain to be cancelled. The third and fourth terms in (4.2) would cancel if $a$ were equal to zero. Their sum is thus proportional to $a$ and can be written as the sum of the three terms in (4.3). The first term is what we get by modifying the limits of integration of the third and fourth terms in (4.2) so that they both go from $a$ to $z-a$. The remaining terms of (4.3) compensate the $O(a)$ errors that arise from such a modification of limits. Each of the terms in (4.3) can, in turn, be expanded in powers of $a$. The lowest-order terms are given in (4.5). The first term in (4.5) is a 'bulk' term and the second one is a boundary term. The first term, in fact, looks like the vertex operator for a massive mode. Thus, consider a massive field $S_{\mu\nu}$ (see eq. (4.8)) with the transformation law (4.6). We could add the background field $\int S_{\mu\nu}\,\partial_u X^\mu \partial_u X^\nu\, du$ to the sigma-model action (along with $\int A_\mu \partial_u X^\mu\, du$), and the gauge variation of the first-order contribution to the proper time equation would then cancel the first term in (4.5). The boundary term in (4.5) can also be cancelled as follows. Consider, again, the second mass level in string theory.
In addition to $S_{\mu\nu}$, there is an auxiliary field $S_\mu$, and the complete vertex operator, together with the usual 'free' gauge transformation of string theory, is given in [14]. Under these transformations we can cancel the second term in (4.5). Thus, we conclude that, at least classically and to lowest order, gauge invariance can be restored by the addition of massive modes with an appropriate gauge transformation law. In fact, one can even say, based on the above, that imposing the U(1) gauge invariance associated with the massless vector on a theory with a finite world-sheet cutoff requires the presence of massive fields. We find this very interesting. Now, the validity of the Taylor expansion, when quantum mechanics is turned on, needs to be checked. We go back to (4.3) and consider the first term. Perform an operator product expansion on each piece separately; the product $A_\mu(p)\,{:}e^{ip\cdot X(u)}\partial_u X^\mu(u){:}\;\Lambda(q)\,{:}e^{iq\cdot X(u+a)}{:}$ gives eqn. (4.13). Subtracting (4.13) from (4.12), we get for the first term in (4.3) the operator in (4.15). (If we assume that a normal-ordering factor $a^{p^2/2}$ accompanies each exponential $e^{ip\cdot X}$, then the factor $a^{p\cdot q}$ in (4.15) is part of the normal ordering associated with $e^{i(p+q)\cdot X}$.) The first term in (4.15) is what we got by the Taylor expansion in eqn. (4.5). The second term is a normal-ordering effect: it is a contraction of the two $\partial_u X$'s. It corresponds to a tachyon-like operator and thus represents a gauge transformation on the background tachyon field, eqn. (4.16). We can do a similar analysis for the boundary terms, and one finds that, in addition to the terms in (4.5), one needs a further term. This is clearly a U(1) gauge transformation of $A_\mu$ itself with parameter $A_\mu\partial^\mu\Lambda$. We have thus shown that the boundary terms can be compensated by the usual linear gauge transformations of $S_{\mu\nu}$, $S_\mu$ and $A_\mu$, merely by redefining the gauge parameter.
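The parenthetical remark about normal ordering can be checked with a one-line identity (elementary algebra, added here for completeness): if each exponential $e^{ip\cdot X}$ carries a factor $a^{p^2/2}$, then

$$a^{p^2/2}\,a^{q^2/2}\,a^{p\cdot q} \;=\; a^{(p^2 + 2p\cdot q + q^2)/2} \;=\; a^{(p+q)^2/2},$$

which is precisely the normal-ordering factor that should accompany $e^{i(p+q)\cdot X}$, as claimed.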
Thus we conclude that (to this order) the theory is gauge invariant with a finite cutoff, provided we modify the gauge transformations of the massive spin-2 field, the vector, and the tachyon in well-defined ways.
There is one consistency check that can be quickly performed. One can check whether the non-linear pieces introduced in the gauge transformation laws of $S_{\mu\nu}$ and $\phi_T$ are consistent with their interacting equations of motion. Obtaining the fully covariant interacting equations of motion is a complicated story. This has been done in some detail in [30], in the case where the electromagnetic field is slowly varying in space-time, which is all we need to lowest order. One can also write down the leading pieces (lowest order in momentum) just by considering the OPE of $A_\mu\partial_z X^\mu$ with itself:

$$A_\mu(p)\,{:}\partial_z X^\mu(z)\,e^{ip\cdot X(z)}{:}\;A_\nu(q)\,{:}\partial_z X^\nu(z+a)\,e^{iq\cdot X(z+a)}{:} \;=\; A_\mu(p)A_\nu(q)\,{:}\partial_z X^\mu(z)\,\partial_z X^\nu(z+a)\,e^{ip\cdot X(z)+iq\cdot X(z+a)}{:}\,|a|^{p\cdot q} \;+\; A\cdot A\,\frac{1}{a^2}\,{:}e^{ip\cdot X(z)+iq\cdot X(z+a)}{:}\,|a|^{p\cdot q} \;+\;\text{higher order in } p, q. \quad (4.18)$$

Thus, to lowest order in momentum, the equation of motion of $S_{\mu\nu}$, which starts out as $(\tfrac{p^2}{2}-1)S_{\mu\nu}$, gets modified to (4.20), and the equation for $\phi$ becomes (4.21). Both of these, (4.20) and (4.21), are consistent with the modifications (4.6) and (4.16), respectively. Thus we conclude that keeping a finite cutoff while retaining gauge invariance forces one to introduce background massive modes, and to modify their transformation laws. These modifications are consistent with what one expects from their interacting equations. We expect that at higher order in $a$ one will need other massive modes as well. Now, all this is not surprising. A string is a non-local object (but with local interactions), which is why we have an infinite number of point particles. A finite cutoff makes the theory non-local, albeit in a crude way. Requiring gauge invariance is a way to make this more refined and 'string'-like. This requires massive modes. One can also argue, as in the introduction, from the viewpoint of the renormalization group, that keeping a finite cutoff entails retaining all the irrelevant operators that correspond to massive modes.
Thus there are different ways to understand or rationalize these results. Nevertheless we think that deducing the existence of massive modes and their transformation properties from the requirement of ordinary ('massless') gauge invariance is very interesting.
Conclusions
In this paper we have investigated two interrelated topics: i) gauge invariance at the interacting level, and ii) keeping a finite cutoff and going off-shell. We have a technique for dealing with gauge particles in the proper-time framework. This was an extension to vector-vector interactions of the results of ref [15] for free gauge particles. We do not yet know how to extend this to an off-shell calculation. In section 3 we discussed the simpler version of the above problem: going off-shell with a finite cutoff in the case of the covariant Klein-Gordon equation. We presented one possible form of the interacting term that satisfies some basic properties of gauge invariance and has the right on-shell limit. There are other solutions possible. In particular, if one does the same calculation on the boundary of a disc, one will have manifest cyclic symmetry. In order to proceed further, one needs a prescription for going from 3-point functions to 4-point functions or higher n-point functions. This, we think, is the most pressing issue in this approach. In sec. 4, in studying the 4-point function directly, we discovered that a nonzero cutoff, along with the requirement of gauge invariance, predicts not only the existence of massive modes, but also the right transformation law. We find this promising. It would be interesting to extend these results to all the massive modes and higher invariances. Finally, on a more speculative note, the idea of a finite world-sheet cutoff has to get translated to a finite space-time cutoff.
"year": 1994,
"sha1": "8020687de875a07602dab472f4c7f52b777b45e0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/9409023",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8020687de875a07602dab472f4c7f52b777b45e0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Tungiasis Stigma and Control Practices in a Hyperendemic Region in Northeastern Uganda
Neglected tropical diseases are known to be highly stigmatized conditions. This study investigates tungiasis-related stigma and control practices in the impoverished Napak District in rural northeastern Uganda, where tungiasis is hyperendemic and effective treatment is unavailable. We conducted a questionnaire survey with the main household caretakers (n = 1329) in 17 villages and examined them for tungiasis. The prevalence of tungiasis among our respondents was 61.0%. Questionnaire responses showed that tungiasis was perceived as a potentially serious and debilitating condition and that tungiasis-related stigma and embarrassment were common. Among the respondents, 42.0% expressed judging attitudes, associating tungiasis with laziness, carelessness, and dirtiness, and 36.3% showed compassionate attitudes towards people with tungiasis. Questionnaire responses further indicated that people made an effort to keep their feet and house floors clean (important tungiasis prevention measures), but lack of water was a common problem in the area. The most frequent local treatment practices were hazardous manual extraction of sand fleas with sharp instruments and application of various and sometimes toxic substances. Reliable access to safe and effective treatment and water are therefore key to reducing the need for dangerous treatment attempts and breaking the vicious cycle of tungiasis stigma in this setting marked by poverty.
Introduction
Tungiasis is a neglected parasitic skin disease caused by the sand flea Tunga penetrans. It is widespread throughout sub-Saharan Africa and South America, particularly among the most marginalized populations [1,2]. Female sand fleas burrow into the skin of humans and animals, usually in the feet, where they grow and shed eggs onto the ground [1,3]. A risk factor analysis from Kenya identified regular washing of the feet with soap as well as frequent cleaning of house floors as protective factors, and consequently, lack of access to water and soap as a risk factor for tungiasis [4]. To date, the only safe tungiasis treatment proven to kill sand fleas is topical application of dimeticone oils (NYDA®), which seal the respiratory and reproductive systems of embedded sand fleas [5][6][7][8]. However, dimeticone oils are hardly ever available to affected communities. In the absence of effective treatment, people commonly turn to extracting sand fleas manually with inadequate, non-sterile instruments, like thorns or safety pins [5,9], a painful and hazardous method which can lead to serious infections and mutilations [5,10].
Population-based studies have shown high prevalence of tungiasis, for example, 25% in rural Kenya [4], 45% in rural Nigeria [11], and 43% in Brazilian fishing villages [12], and very severe cases with hundreds of sand fleas have been described in Tanzania, Colombia, and Madagascar [2,13,14]. In Napak District, rural northeastern Uganda, our study team recently found extremely high tungiasis prevalence of 62.8% [15]. Despite its endemic presence in many countries, tungiasis has been neglected by healthcare professionals and researchers alike [1,16].
Neglected tropical diseases (NTDs) are known to be highly stigmatized, not least because of their association with poverty [17][18][19][20][21][22]. NTDs are also understudied, and this is even more true for NTD-related stigma [18,21], with the exception of widely studied leprosy stigma [17,[23][24][25][26]. Disability-adjusted life years (DALYs) and economic impact of specific NTDs have been useful to demonstrate disease burden, but more attention needs to be paid to the problem of social stigma [21].
Stigma reduces life opportunities, exposes those affected to discrimination, and negatively affects mental and physical health [27]. Health-related stigma has been described as "a social process or related personal experience characterized by exclusion, rejection, blame, or devaluation that results from experience or reasonable anticipation of an adverse social judgment about a person or group identified with a specific problem" [28] (p. 280). In relation to NTDs, stigma not only adds significant psychological suffering to physical and economic hardship, but the resulting social isolation can further trap those affected in a cycle of poverty [13,18]. Neglected diseases are, in fact, diseases of neglected people [29,30]. This calls for a bio-social approach that goes beyond drug administration and takes into account local practices and power dynamics [31], thus highlighting the social factors that contribute to continued morbidity [32].
This study aims at developing such a bio-social approach to tungiasis control and stigma in the impoverished Napak District in northeastern Uganda, where tungiasis is hyperendemic. The aim of this study was to investigate local attitudes and control practices regarding tungiasis, with a focus on stigma. We ask:
• What are people's attitudes towards tungiasis?
• How does stigma play out when a condition is so common that most people are affected?
• How does stigma relate to local tungiasis control practices?
Study Design and Context
This article presents stigma-related data derived from a household-based knowledge, attitudes, and practices (KAP) survey (Supplement S1), conducted between February and September 2021.
This study took place during the initial phase of a two-year long tungiasis control intervention project (2021/2022), the results of which are published elsewhere in this issue [15]. The larger intervention aimed at eliminating tungiasis as a public health problem in the area and included regular screening of the population for tungiasis; treatment of cases with a mixture of two dimeticone oils (NYDA ® ); and tungiasis-related health education and community engagement in Napak District, Uganda.
Our study team applied a KAP questionnaire to a representative member of each household in the study area who identified as the main household caretaker. We included 1329 out of a total of 1338 households in the study area (nine did not consent). Households included in the study presented here had not been exposed to the intervention at the time of data collection, i.e., they had not previously been examined for tungiasis and had not received tungiasis-related treatment or health information from our team.
Study Site and Population
The study took place in 17 villages located in three of the five parishes that constitute the Ngoleriet Subcounty in Napak district, Karamoja region, northeastern Uganda. Based on the national census in 2014 and local population growth rates, the total population of Ngoleriet Subcounty in 2021 has been estimated at 13,400 [33]. This rural study area was chosen because the District Health Office had alerted our study team to the high occurrence of tungiasis in the area. In a fact-finding pilot study in November 2020, our team confirmed that tungiasis was highly prevalent in the three parishes, using rapid assessment for tungiasis [34] in 11 villages. The pilot study showed a tungiasis prevalence of 68.5% (n = 456) among 666 examined individuals [15].
The total population of the 17 villages in our study area was 5482 individuals during the project's baseline evaluation in February/March 2021 [15]. They belong to the semi-nomadic Karamojong ethnic group living in small villages which were further compartmentalized into manyatas, groups of houses surrounded by stick fences as protection against wild animals and animal raiders. Karamoja has a long history of animal raids and ethnic conflict and has long been a marginalized region in Uganda with widespread poverty [35]. Houses in the study area were predominantly made of sticks with grass roofs and earthen floors, which were sometimes smeared with cow dung to harden and smoothen the surface. Living conditions in the study area were generally very poor, and hunger and malnourishment were common (unpublished observation, F.M. and M.B.). Access to water was limited, as the few existing boreholes and shared water taps were located at distances of up to 3 km from people's homes and were prone to breaking. Traditionally, Karamojong men are cattle herders [35], and only a few animals were kept in the villages, while most were taken to other places for grazing and to protect them from raids. Women usually stayed in the villages when men were away herding the animals. In addition to cattle herding, small-scale crop farming and low-wage day labor were common sources of income. The local population had very limited access to formal medical care, as health units were understaffed and located far away from the villages.
Data Collection and Analysis
Data was collected by Village Tungiasis Health Workers (VTHW), who had been trained over seven days particularly for this study and the tungiasis intervention program. They were supported by Village Health Teams (VHT) and local village leaders who mainly worked as mobilizers. The VTHW consisted of bi-lingual, literate individuals local to the study area who communicated with the respondents in Ngakarimojong language and recorded their answers in English. They were accompanied by members of our research team (F.M. and M.B.) as well as a study nurse and a social worker who assisted them in using mobile phones to record questionnaire data in ODK collect, an open-source digital Android app.
Our team designed the KAP questionnaire specifically for the tungiasis project. It consists of 50 questions, including binary questions, questions with multiple answer options, and open-text questions. Questions with pre-defined response options were not read aloud to respondents to avoid influencing their answers. Instead, the data collector chose the response category that best fitted the given answer. Responses to open-text questions were summarized and entered into an open-text box by the data collector. The questionnaire also asked about the sociodemographic information of our respondents.
Following the questionnaire interview, VTHWs physically examined the participants for tungiasis on the feet and other potentially exposed body sites. Tungiasis cases were treated topically with dimeticone oils (NYDA ® ).
Questionnaire data and clinical data about tungiasis infection were transferred to Microsoft Excel (2016) and double-checked for consistency. For this article, we purposely selected 11 relevant questions from the KAP survey that related to stigma and tungiasis control practices (Supplementary S1). One of these questions was from the "knowledge" section of the questionnaire, four from the "attitudes" section, and six from the "practices" section. We performed statistical analysis in Microsoft Excel (2016) and SPSS (IBM SPSS Statistics Version 25). Open-text questions were thematically analyzed by clustering similar answers (coding) and defining labels for the resulting thematic categories [36]. This approach allowed us to subsequently quantify frequencies of the identified response categories.
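The coding-then-counting step described above can be sketched as follows (a minimal illustration; the labels and data are hypothetical, not the study's actual codebook or responses):

```python
from collections import Counter

# Hypothetical coded responses: each open-text answer has been assigned one or
# more thematic labels during coding (labels are illustrative only).
coded_responses = [
    ["difficulty walking/working"],
    ["difficulty walking/working", "loss of appetite and weight"],
    ["fear of death"],
    ["isolation/social problems", "difficulty walking/working"],
]

# Flatten and count label frequencies; the number of coded responses can
# exceed the number of respondents when answers receive multiple codes.
label_counts = Counter(label for labels in coded_responses for label in labels)
n_respondents = len(coded_responses)
n_responses = sum(label_counts.values())

for label, count in label_counts.most_common():
    print(f"{label}: {count} ({100 * count / n_respondents:.1f}% of respondents)")
```

Because one answer can carry several codes, the total number of coded responses can exceed the number of respondents, as in Table 2 (755 responses from 734 respondents).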
Ethical Considerations
Informed written consent was taken from each study participant. The VTHWs explained the aims and methods of the study in simple words in Ngakarimojong language, and individuals had time to ask any questions. Refusal to participate in the survey did not affect the right to be treated for tungiasis. Study participants gave consent by signing a form that was read out to them or by providing their fingerprint if they could not write their name. In the case of minors (under 18 years of age), both the minor and an adult caretaker were asked for informed written consent. The questionnaire interviews and physical examinations were conducted in a place chosen by the respondent.
Ethical approval for this study and the intervention program was given by the Vector Control
Results
Our 1329 questionnaire respondents represented their households as the family members who took on most of the caring responsibilities in the household. Sociodemographic characteristics are presented in Table 1. Our respondents were mostly women (89.4%), and the median age of the respondents was 44 years (min 9 years/max 115 years). Respondents most frequently described their main occupation as casual labor (43.2%) or "none" (29.1%); and some as small-scale crop farming (14.9%); small business (9.1%); and other occupations (3.7%) that include students, employees, and others. Formal education levels among our respondents were extremely low; the vast majority (84.7%) had never attended school (Table 1). The prevalence of tungiasis among our respondents was very high. Physical examination showed that tungiasis was present in 811 (61.0%; 95%CI 58.3-63.7%) of our 1329 respondents. Among the respondents with tungiasis, the median number of lesions was 14 (min/max: 1/591; IQR 23).
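The reported confidence interval can be approximately reproduced from the raw counts (811 positives out of 1329 examined). The paper does not state which interval method was used; the sketch below uses the Wilson score interval, one common choice, which gives roughly 58.4-63.6%, close to the reported 58.3-63.7%:

```python
import math

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% confidence interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

low, high = wilson_ci(811, 1329)
print(f"prevalence {811/1329:.1%}, 95% CI {low:.1%}-{high:.1%}")
```

The slight mismatch with the published bounds suggests the authors may have used a different (e.g. exact) interval method.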
Tungiasis was not only very common but was also perceived as a potentially serious and debilitating condition. When asked if tungiasis could cause severe illness, almost all respondents (n = 1293/1329; 97.3%) answered "yes". When asked if tungiasis affected their everyday life, over half of the respondents (n = 734/1329; 55.2%) said "yes", referring to their own and/or their family members' tungiasis infection. Those who had answered "yes" were asked how tungiasis affected people's lives. The most frequently described impact was difficulty walking and working, followed by loss of appetite and weight, fear of death, and isolation/social problems (Table 2).

Table 2. Effects of tungiasis on everyday life (open-text question). Several individuals gave more than one response; therefore, the number of responses (n = 755) was higher than the number of respondents (n = 734).
Embarrassment
Despite the very high prevalence of tungiasis in the community, over half of the 1329 respondents (n = 719; 54.1%) said they felt embarrassed when having sand fleas. Examples of reported reasons for tungiasis-related embarrassment are: "[Tungiasis] brings public shame and isolation"; "They will talk and laugh about people with jiggers"; and "People begin abusing because it shows that you are not responsible towards yourself". Most frequently, responses referred to feelings of shame, fear of social isolation and stigmatization, severe pain, being ridiculed/talked about/abused, intense itching, and fear of dying (Table 3). It should be noted that these categories are not well-distinguished from each other. The category "feelings of shame", for example, may include all the other listed reasons for embarrassment, and "fear of social isolation and stigmatization" may also refer to, for example, the result of "being ridiculed, talked about, and abused" because of severe pain or itching that was difficult to hide. Table 3 thus displays various interconnected aspects of tungiasis-related embarrassment. These responses demonstrate that even in a community with a very high prevalence of tungiasis, embarrassment around the condition persists.
Association of Tungiasis with Lack of Hygiene
The KAP questionnaire further revealed that respondents frequently associated tungiasis with dirty homes and lack of bodily hygiene. When asked to name different factors that increased the chances of an individual to contract tungiasis, the most frequent answers were dirty/dusty floor in the house, mentioned by 86.5% of respondents (n = 1150); poor bodily hygiene, named by 70.5% of respondents (n = 937); and poor housing, specified by 59.7% of respondents (n = 793). In accord with the notion that lack of hygiene caused tungiasis, the most frequently named methods for tungiasis control were regular washing of the feet, named by 90.4% of respondents (n = 1202), and keeping the houses/compounds clean, mentioned by 76.2% of respondents (n = 1013). In short, our respondents perceived failure of keeping one's body and home clean as the most important risk factors for tungiasis.
Attitudes towards People with Tungiasis
Our field team asked the respondents "What do you think about people with tungiasis?", and coding of the responses resulted in the response categories shown in Table 4. We grouped these categories as "judging attitudes", "compassionate attitudes", and "other comments". "Judging attitudes" were displayed by 42.0% of our respondents and included characterizations of people with tungiasis as lazy, careless, dirty, irresponsible, and drunkards. Compassionate attitudes were expressed by 36.3% of our respondents. They included the representation of people with tungiasis as in need of help, being very sick, being elderly or disabled, being people too, as well as expressions of sympathy, such as "I feel sorry for them". Other comments could not clearly be labelled as either judging or compassionate. These included the view that people with tungiasis needed more advice on how to prevent or treat sand fleas, and other statements (Table 4). Judging attitudes were very widespread despite the very high prevalence in the community and the fact that most respondents had tungiasis themselves at the time of data collection. Members of our field team (F.M. and M.B.) clarified that having a few sand fleas was seen as normal in the villages, and that respondents will most likely have referred to severe cases of tungiasis in their responses. To investigate the stigmatization of heavy tungiasis infection further, we separately analyzed responses of individuals with 30 or more tungiasis lesions on the body, defined as heavy tungiasis infection [37]. At the point of data collection, 193 (14.5%) of our 1329 respondents had heavy tungiasis infection. As expected, they expressed more compassionate attitudes (52.9%; n = 102), primarily sympathy (21.2%; n = 41) and the need for help (16.1%; n = 31).
However, even among the heavily affected, 21.2% (n = 41) displayed judging attitudes by labelling people with tungiasis as lazy (8.8%; n = 17), careless (6.2%; n = 12), irresponsible (4.1%; n = 8), and dirty (2.1%; n = 4).
Treatment and Prevention Practices
When asked how they treated tungiasis in their families, 84.1% (n = 1119) of respondents named extraction with sharp instruments like thorns, needles, pins, and razor blades, and 14.8% (n = 197) said they applied various substances (Table 5), mainly greasy products like petroleum jelly. When asked about additional treatment practices, 74 (5.6%) mentioned manual extraction of sand fleas, which raises the reported prevalence of this practice to 89.7% (n = 1193). Those who had stated manual extraction of sand fleas as the main treatment (n = 1119) were asked about details of their extraction practice (Table 6). The majority (63.8%) said they shared their extraction instruments with others, and 53.1% said they boiled them. Use of antiseptics was reported by 14.3%, and 58.4% (n = 654) named one or several substances they applied to the wound after extraction. Hot or cold ash (n = 342) and tobacco (n = 296) were mentioned most often, together with greasy substances (n = 67) like petroleum jelly. Some also mentioned cooking oil, castor oil, paraffin, and harmful substances like used engine oil, diesel, and petrol. These substances were often applied as mixtures, for example cooking oil mixed with tobacco and ash. Furthermore, 25 respondents named herbal remedies, including aloe vera, milk bush sap, balamite, a local tree called epuu, a fruit called eome, seeds from the ekolej tree, and others. In the study area, the houses had earthen floors. People habitually walked barefoot or used rubber sandals. The great majority of respondents (93.8%) stated that they washed their feet once or several times per day (Table 7). Similarly, most respondents reported that they swept their houses and their compounds daily (83.5% and 76.1%, respectively). Only a small minority said they swept their houses and their compounds less than every other day (4.8% and 10.8%, respectively).
In addition to sweeping, 260 respondents (19.6%) said they had applied commercial insecticides in their houses in the past, namely the toxic substances "Dudu Dust" (carbaryl) [38] and "Supona" (chlorfenviphos) [39]. However, our field team observed that insecticides were rarely available (unpublished observation, F.M. and M.B.). Several respondents also mentioned smearing house floors with cow dung as a traditional way of keeping them smooth and clean, but as most cattle were herded away from the villages, cow dung was scarce (unpublished observation, F.M. and M.B.).
Attitudes towards Tungiasis
Our finding that most respondents perceived tungiasis as a potentially serious and debilitating condition contrasts with the reported perception that tungiasis is not an important health threat in affected communities in Northeast Brazil [16]. However, the strong impact of tungiasis infection on everyday life in our study is consistent with findings from a study on children in Nigeria, where 78% of affected children reported that tungiasis had a moderate or severe effect on their quality of life, which rapidly improved after effective treatment [4]. Impairments reported by our respondents, namely mobility restrictions, sleep disturbances, pain and itching, social isolation, and being ridiculed have been described for tungiasis before [40,41]. Similar problems have also been reported regarding other NTDs, like cutaneous larva migrans [42,43], lymphatic filariasis [44], and leprosy [25]. However, our finding that some respondents associated tungiasis with weight loss and fear of dying is remarkable and highlights the severity of the problem in Napak District. Considering that the local population lives in dire poverty [45], with hunger and malnutrition being commonplace, a parasitic infection like tungiasis can quickly become a physically and socially existential threat. This finding is corroborated by a series of extremely severe tungiasis cases in indigenous villages in the Colombian Amazon region [13], where affected individuals presented with life-threatening malnutrition, severe anemia, weight loss, and immobility and were left by their families to die in the forest.
Our study further provides insights into tungiasis-related stigma by presenting widespread judging attitudes and stereotyping labels, such as "lazy", "careless", "dirty", and "irresponsible". Similarly, a study from rural Eastern Uganda reported that 31% of their respondents characterized individuals/families with tungiasis as "lazy" and 17% as "irresponsible" [46]. However, in contrast to our findings, this study did not mention an association with dirtiness. "Dirty" is a common stigma label that particularly affects individuals and groups with poverty-associated conditions [47]. In a qualitative study in Kenya, tungiasis sufferers felt that they were perceived as dirty and disgusting in the community [48]. According to labelling theory in stigma research, the judging labels we identified are "informal labels" that affect individuals in day-to-day interactions in their community [49]. Stigma has famously been described as an "attribute that is deeply discrediting", marking a person as a "tainted, discounted one" [50] (p. 3). Stigma thus contributes to significant suffering and an often hidden burden of illness [28], and NTDs are known to be particularly highly stigmatized conditions due to their common association with poverty, physical impairment, and disfigurement [17,21]. However, stigma usually affects minority groups who are "distinguished [...] as a separate social entity" [51] (p. 462). The finding that tungiasis-associated stigma was still highly relevant in a community where the majority was affected thus deserves further attention.
Stigma in a Hyperendemic Environment
The high prevalence of tungiasis infection (61.0%) among the interviewed household caretakers corresponds with the prevalence of 62.8% among the overall population in our study area in Napak District, northeastern Uganda [15]. Tungiasis is known to be endemic in Uganda, especially in rural communities in Eastern regions, where a study found that in 22.5% of households, tungiasis infection was present in at least one household member [46]. Population-based studies from Kenya, Nigeria, and Brazil show high tungiasis prevalences of 25 to 56% [4,11,12,52]. However, communities with very high prevalences of over 60%, like in Napak District, have, to our knowledge, not been studied before.
In the studied setting, where single tungiasis lesions were seen as normal (F.M. and M.B., unpublished observation), stigmatization did not affect everyone with tungiasis in the same way. We can expect a gradual degree of stigmatization and that the judging and compassionate attitudes, together with the respective labels, more greatly affect those who bear the most notable signs of tungiasis. Similarly, a study of lymphatic-filariasis-associated stigma in Nigeria [53] showed that those with the most pronounced signs of the disease were most stigmatized.
Interestingly, judging attitudes towards people with tungiasis were also expressed by respondents who were themselves heavily infected, albeit only half as often as in the overall study population (21.2% vs. 42%). This finding might be linked to an attempt to distance oneself from the stigmatized group, in itself an expression of "felt stigma", with its two components of feeling shame and fear of discrimination [54]. Furthermore, the expressed judgmental attitudes might indicate a degree of "self-stigma", the internalization of normative stigma by those who possess stigma markers. In cases of self-stigma, "people's self-concept is congruent with the stigmatizing responses of others; they accept the discredited status as valid" [55] (p. 3).
Stigma and Tungiasis Control Practices
The vast majority of our respondents (89.7%) stated that they practiced manual extraction of sand fleas with sharp instruments, which is known as a common method in endemic communities [5,9,10]. Manual extraction is painful and dangerous as it can lead to mutilations and life-threatening bacterial infections [5,56], but in the described context, people had few alternatives. The application of hazardous substances to the affected skin (like tobacco and engine oil), which some of our respondents described, can lead to skin damage and intoxication and has been described for other NTDs in areas with limited access to medical care, for example cutaneous larva migrans in Brazil [43].
Importantly, the presented treatment and prevention practices (including hygiene measures) require various resources: water and soap to wash the feet; mobility to sweep the house and compound every day; good eyesight and fine motor skills to perform manual extraction of sand fleas; money to buy substances to apply to lesions; and the ability to prioritize hygiene and self-care in a setting where hunger and scarcity are commonplace [45]. As embedded sand fleas continually shed eggs onto the ground [3], people with the least individual and social resources who cannot keep their houses and bodies clean and remove sand fleas from their feet, will accumulate eggs and larvae in their immediate environment and hence be vulnerable to repeated and severe tungiasis infections [4]. We can assume that this will lead to a vicious cycle in which they become more and more immobilized and dependent on help from others, while at the same time they are also increasingly at risk of being stigmatized and socially isolated.
Our finding that some of the interviewed household caretakers stated that people with tungiasis "need advice" (8.0%; n = 106) suggests that tungiasis-related community sensitization might be useful to support tungiasis control in this setting. Indeed, community engagement could build on the finding that 36.3% of our respondents expressed compassionate attitudes towards people with tungiasis, such as "they need help", which indicates the presence of, or at least the potential for, networks of care within the community. However, health education may have unintended consequences. Anthropologists argue that global health campaigns that foreground hygiene education and implementation often inadvertently increase "hygiene stigma" in the community, which causes further shame and isolation for those who are unable to put the set hygiene goals into practice [47]. When this happens, people might resort to manual extraction of sand fleas even more to avoid being seen as dirty.
Whilst stigmatizing, both the common association of tungiasis with dirtiness and the social isolation of patients are, from a medical perspective, reasonable. Lack of personal hygiene and dirty/dusty floors have indeed been established as risk factors for tungiasis [4,37]. As sand fleas embedded in the skin continuously excrete eggs onto the ground [57,58], the infested soil in places where affected individuals reside or walk poses a risk of infection to others. This consideration, however, contrasts with the definition of health-related stigma as "medically unwarranted with respect to the health problem itself" [28] (p. 280, our emphasis). Although attributes like "lazy", "careless", and "dirty" carry unwarranted moral judgment about tungiasis-affected individuals, their social isolation can (at least partly) be understood as a medically warranted attempt to avoid spreading and acquiring tungiasis infection in the absence of effective and safe treatment. Health education alone is therefore unlikely to reduce stigmatization.
Access to safe and effective tungiasis treatment would make painful and dangerous treatment attempts obsolete and would presumably lower the burden of stigma and social isolation significantly. However, we recognize that "[m]ore medicines alone cannot ensure the treatment of neglected tropical diseases" [59] (p. e330). Integration of treatment into local health systems, possibly at a village level, is necessary to make it accessible to those who need it most and to ensure the sustainability of treatment success. Community engagement is an essential tool to establish understanding of treatment and prevention and build local networks of care, ideally with formalized and paid community health workers [60].
Study Limitations
We included a small number of children under 16 years of age in this study (n = 14; minimum age = 9 years), as in rare cases children lived in their own huts near other family members' accommodation. However, the questions were simple so that we could assume that the children were able to answer them appropriately. Twelve respondents reported they were over 90 years old (maximum age = 115 years), which in the presented context may simply mean "very old", as not everyone knows their exact age. Similarly, precise information about the respondents' main occupation was difficult to obtain as people had little to no school education and no formal employment, and they used various strategies at the same time to make ends meet.
The KAP questionnaire was developed based on the expertise of our team members and had not been previously used or pre-tested. Self-reported information about prevention and treatment practices is prone to information bias, and we did not observe if our respondents acted according to their claims. In particular, hygiene measures (frequency of cleaning/washing, boiling of extraction instruments, and use of antiseptics before manual extraction) may have been overstated.
Conclusions
Despite the high prevalence of tungiasis in Napak District, stigma and shame around the condition were common. Judging attitudes about people with tungiasis were widespread and could partly be attributed to the commonly made association of tungiasis with dirtiness. While hygiene measures are a key aspect of tungiasis control, prioritising hygiene education thus bears the risk of further increasing pre-existing "hygiene stigma", especially when water is scarce. The dangerous and painful method of extracting sand fleas with sharp instruments was very common. Making safe and effective treatment available in the community can be expected to make dangerous treatment attempts obsolete and to break the vicious cycle of tungiasis infection, impairment, stigma, and social isolation.
An Integrated Framework for Managing Information Technology Security Uncertainty
Information security has drawn considerable attention in the business world. Cyber security standards play a significant role in offering feasible approaches to organizations engaged in comprehensive strategic planning. This paper aims at providing a systematic overview of information technology (IT) security management in organizations. Through a structured review of the academic literature and industry whitepapers, we examine a number of the critical issues and challenges facing the industry today and in the future. In line with the fundamental elements of information security, we propose an integrated framework to understand the current situation of IT security management. In particular, we focus on several critical functions of IT security management: Security and Risk Management, Security Operations, and Security Assessments and Testing. Then, we use the proposed framework as a lens to discuss and solve the security issues raised by bring your own device (BYOD) policies in organizations.
We draw on recent media articles and personal experience to provide a summarizing look at the security environment and how it supports the overall business organization. We further analyze the problems and challenges that three of the top security areas (i.e., Security and Risk Management, Security Operations, and Security Assessments and Testing) will pose for information systems management and for business as a whole. Our research identified that some of the most significant trends affecting the security industry are also some of its biggest challenges.
To review the literature, we conducted a comprehensive search. As proposed by Webster and Watson (2002), this research is not limited to a specific journal and tries to cover all relevant literature. It focuses on papers published in peer-reviewed journals that develop frameworks or propose theories for information security. This goal was pursued in two steps. In the first step, a number of keywords were identified to start the search. The search process started within important electronic databases, including Science Direct, Web of Science, and Academic Search Premier, as well as white papers available on leading practice websites (e.g., onlinetech.com). We started our search by trying "IT security" and "Framework". Using different keyword combinations of these groups, several seminal papers were found, and we then located new papers through the references of these seminal papers. In the second step, the citations in each collected paper were reviewed to identify other potentially related papers.
Information Security from a Risk Management Perspective
The phrase "information security" in the business context is a broad term and has been examined from different aspects, such as technology (Li and Guo, 2007), people (Dhillon and Backhouse, 2001), and protection processes (Bishop, 2003). As such, for a business organization, managing information security is the process of managing IT-based risk. Given the pervasiveness of IT in every aspect of business processes in the organization, IT risk has become more important in corporate risk management (Hunter and Westerman, 2007). IT risk may "damage corporate reputations and expose weaknesses in companies' management teams. Most importantly, IT risk dampens an organization's ability to compete" (Hunter and Westerman, 2007).
Principles of Information Security
Information security is achieved through a set of processes that contain policies, standards, mechanisms, governance, and practices. All of these controls aim to achieve three core goals of information security: Confidentiality, Integrity, and Availability. The three principles together are also called the CIA triad, which is a model for making information security policies within an organization (Khansa and Zobel, 2014). Business organizations in many industries use the CIA triad as a guideline for the analysis, evaluation, planning, and implementation of their corporate information security (Lopez and Oliveira, 2014).
Confidentiality refers to the company protecting its information assets by restricting access to authorized entities (Keung, 2014; Lopez and Oliveira, 2014). The company is required to define the authority of different groups of users (Lopez and Oliveira, 2014). For example, executives are authorized to access all corporate information, while employees are only authorized to access information associated with their jobs. Integrity refers to the company protecting its information assets from unapproved modification (Keung, 2014; Lopez and Oliveira, 2014). The company should ensure that corporate information (e.g., data) cannot be edited without authorization (Khansa and Zobel, 2014). For example, if integrity could be violated easily, employees would be able to change their salary in the corporate accounting systems. Availability refers to the company protecting its corporate information assets from unapproved interruption (Keung, 2014; Lopez and Oliveira, 2014). The company should ensure the availability of information systems to meet business needs in a timely manner (Keung, 2014). The system should also be reliable for access and usage (Khansa and Zobel, 2014).
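The Integrity principle in particular can be made concrete with a short sketch. The snippet below is a minimal, illustrative example (the function names and the payroll record are hypothetical, not from any cited framework): a cryptographic digest is stored when a record is created, and any unauthorized edit, such as the salary change mentioned above, is detected because the digest no longer matches.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest that acts as an integrity fingerprint."""
    return hashlib.sha256(data).hexdigest()

def verify_integrity(data: bytes, expected_digest: str) -> bool:
    """True only if the data has not been modified since fingerprinting."""
    return fingerprint(data) == expected_digest

# A payroll record is fingerprinted when it is stored...
record = b"employee=jdoe;salary=55000"
digest = fingerprint(record)

# ...and verified before use: any unauthorized edit changes the digest.
tampered = b"employee=jdoe;salary=95000"
print(verify_integrity(record, digest))    # True: unmodified record passes
print(verify_integrity(tampered, digest))  # False: modified record fails
```

In a real system the stored digest would itself need protection (e.g., a keyed HMAC or a signature), since an attacker who can rewrite the record could also rewrite a plain hash.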
Security Operations and Investigation
The objective of security operations is threefold. First, it provides a process that allows an organization to manage its overall security operations. As part of this process, organizations are able to develop mechanisms that allow security staff to investigate and decide on the policies and methods used in security operations, based on an analysis of the potential benefits and level of risk (Morris, 2012). Second, it allows the organization to understand and evaluate how the security operations provided enable it to achieve desired outcomes. It will also establish a mechanism for tracking how security operations can respond to risks in an organizational environment (Morris, 2012). Finally, it provides control over which services are offered, with what level of security, and under what conditions (Morris, 2012). In order to develop a comprehensive overview of security operations, we consider investigation, incident management, and disaster recovery in this section.
It is often the case that security operations initiatives are unsuccessful when corporations ignore the processes and attempt to fix a problem right after it occurs. Security operations must include standards and policies, which may require the adaptation of best-practice methods to individual circumstances. It is vital to have a consistent approach for people at all levels of the organization. The organization must begin with setting a clear strategy and defining the policies that drive the way it will be achieved. With those policies in place, organizations will be able to control the way they carry out their security operations (Morris, 2012).
European Journal of Business and Management, www.iiste.org, ISSN 2222-1905 (Paper), ISSN 2222-2839 (Online), Vol. 12, No. 18, 2020
Effective investigations will help to reduce guesswork by revealing associations hidden in the data. Those associations are useful for security operations. The security team will respond appropriately to mitigate or eliminate threats and uncover meaningful patterns. For security investigation, the processes can be summarized as data collection, monitoring and analysis, and solutions. Corporations use different methods and tools to make the business safer. Monitoring, data intake, and initial response are the essential responsibilities of enterprise security operations. Data gathering would be the first step in the investigation. Security management relies on a data-driven decision. During this stage, corporations will gather and extract information from the vast amount of available data. The security team will ensure that they will collect incident data consistently and accurately. Then they will analyze these data to derive useful information about security issues and educate upper management about the variety and intensity of threats to the corporations. Organizations can take advantage of data to produce usable insights to guide their decisions. These insights will be used to support activities across the entire organization (McIlravey, 2015).
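The data-collection and analysis stages described above can be sketched in a few lines. This is a hypothetical illustration, not a description of any particular product: incident records gathered during investigation are aggregated into counts that can be reported to upper management, exactly the kind of "usable insights" the text refers to.

```python
from collections import Counter

# Hypothetical incident records gathered during the data-collection stage.
incidents = [
    {"id": 1, "type": "phishing", "severity": "high"},
    {"id": 2, "type": "malware", "severity": "medium"},
    {"id": 3, "type": "phishing", "severity": "high"},
    {"id": 4, "type": "lost_device", "severity": "low"},
]

def summarize(records):
    """Aggregate incidents by type and severity for management reporting."""
    return {
        "by_type": Counter(r["type"] for r in records),
        "by_severity": Counter(r["severity"] for r in records),
    }

summary = summarize(incidents)
print(summary["by_type"]["phishing"])   # 2
print(summary["by_severity"]["high"])   # 2
```

Consistent, structured collection is what makes this step trivial; in practice the hard work is normalizing incident data from many sources into one schema before any aggregation can happen.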
Incident Management and Disaster Recovery
Effective incident management will improve availability, ensuring that users get back to work quickly following a security problem, and will help resolve issues in a shorter amount of time. Project teams with a good understanding of policies and procedures help provide a realistic assessment of business impact (Morris, 2012). Consider an incident whose effect is to delay a project and dissatisfy internal customers. The project leader decides to address the issue with the stakeholders and documents it as a lesson learned. Through the impact of such a case, incident management becomes visible to the business and demonstrates its value. The company can then refine its incident management process to better control vendors and to increase service availability by reducing service downtime.
A disaster is defined as a severe disruption of the functioning of a company. Natural disasters are the most common type, including floods, hurricanes, earthquakes, and volcanic eruptions that have immediate impacts on human health and enterprise operations (Wcpt.com, 2014). Organizations must recover operations should any type of disaster occur, and a detailed disaster recovery plan should be reviewed on a quarterly basis. Consider this scenario: you are a service manager in charge of operations for a large manufacturer. In the early morning of a business day, you receive a phone call from an engineer saying there was an earthquake in his region and several systems were impacted. He is waiting for your instructions and guidance. In this case, you need to follow the disaster recovery plan to assess and remedy the situation.
Security Assessment and Testing
An analysis of Security Management is not complete without an understanding of a robust Security Assessment and Testing program. Security and Risk Management enables a business to meet regulatory requirements and provides a minimum standard of compliance. Asset Security directly hardens the end devices that are most susceptible to a loss of Confidentiality, Integrity, or Availability. However, Asset Security does not necessarily focus on the business's most critical assets or key infrastructure. Security Operations assist in rectifying any incidents that occur on a network quickly and efficiently. This leaves Security Assessment and Testing as the sole program to identify problems and make security improvements to information systems in a way that prioritizes the key infrastructure and critical assets identified by the organization's Security and Risk Management program.
For a security assessment and testing program to be successful, the testing must include a gamut of automated scans, tool-assisted tests, and manual efforts to challenge the security, as hackers will have equal access to all of these capabilities. Additionally, tests must occur regularly but not on set schedules, thus ensuring that specific information systems are not neglected and that tests do not occur on easily predicted days every month or quarter. However, the frequency and depth of the tests should correspond to the business value of the system to the organization. For example, if BYOD is one of the organization's critical resources for performing essential business functions, then the security team should prioritize tests that focus on these devices. Many other factors should be considered when scheduling testing. There are only so many security testing resources available; thus, it might behoove management to conduct limited automated scans on less valuable assets and full scans with manual oversight on increasingly critical applications. Almost counterintuitively, certain intense scans have the potential to cause harm to an information system due to stress or technical failure, causing loss of availability, and should be carefully selected when applied to critical systems and only conducted during authorized periods of service interruption, previously agreed upon with business management (Stewart et al., 2015). An administrator cannot simply conduct an automated scan of a critical system and conclude the test. A thorough review of the test must be performed, and the results logged and analyzed for potential vulnerabilities. Once the review has been completed, an assessment report is created that details the success or failure of the test, the findings, and the recommended corrective actions.
Additionally, the report presents the threat environment for that specific system and the current and future risks (Stewart et al., 2015). Finally, the security assessment report must include perspective on the company's security posture weighed against technology-specific standards. Yet it must also be presented in a way that management can plainly understand the risks and the proper recommendations (Krause and Tipton, 2006).
Security Assessment
There are two main types of vulnerability scans: network vulnerability scans and web vulnerability scans. Some tools can do both. Nmap is one of the most popular open-source tools for conducting basic network scanning. Nmap has the ability to scan a subnet and identify the current state of ports on a network. Additionally, utilizing the OS detection setting, Nmap can determine essential characteristics of the operating systems running on a network (Shaw, 2015). While Nmap does not cover all of the use cases of a network assessment, it is undoubtedly a good start. Nessus, on the other hand, incorporates Nmap into its network vulnerability assessment and goes several steps further. Once the network has been enumerated with Nmap scans, Nessus can conduct any number of tests on the network. In simple terms, it does this by probing open ports for known vulnerabilities in its database.
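The core idea behind port enumeration can be shown in a few lines. The sketch below is a bare-bones TCP connect scan, offered only to illustrate the mechanism that tools like Nmap automate and extend (with SYN scanning, service and OS fingerprinting, timing controls, and so on); it is not a substitute for those tools, and it must only be run against hosts you are authorized to test.

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` accepting TCP connections on `host`.

    A plain TCP connect scan: a successful connect means the port is
    open. connect_ex returns 0 on success and an errno value otherwise.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Only scan hosts you are explicitly authorized to test.
print(scan_ports("127.0.0.1", [22, 80, 443]))
```

A vulnerability scanner like Nessus starts from exactly this kind of enumeration and then matches what it finds on each open port against its database of known vulnerabilities.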
When a probe succeeds, Nessus provides a report to the administrator. Similar to network vulnerability scanning, Nessus also provides web vulnerability scanning. The main difference between the two is that the web vulnerability scanner is generally able to probe deeper into the configuration of the web server, compared to the network assessment's probe of individual hosts (Stewart et al., 2015). These tests can support patch management and can determine whether any systems on the network are out of compliance with the latest updates. Nessus also supports scheduled tests that can be run regularly without the need for manual intervention (Kumar, 2014).
While the information security manager is not going to be the individual configuring Nmap and Nessus scans, it is crucial to understand the basic capabilities and limitations of the most common industry-accepted tools. Like vulnerability assessments, penetration tests also have their own unique set of industry-accepted tools. A penetration test is a legal and authorized attempt to exploit vulnerabilities on an information system or network (Engebretson, 2011). Generally, a penetration test targets a specific system or systems and utilizes a gamut of tools and tactics to gain access and demonstrate a flaw. As with vulnerability assessments, the end goal is to provide a detailed report of all the flaws identified during the test and provide recommendations for hardening.
Security Testing
There are three different types of penetration tests. White box testing is conducted within the organization; the security professionals conducting the test are completely familiar with the network. Black box testing is the opposite and represents a test in which the attackers have no information about the network prior to conducting the penetration test. Gray box testing is some combination of the two (Muniz and Lakhani, 2013). An excellent example of this is the U.S. Department of Defense's "Hack the Pentagon" bug bounty program. This gray box test, in which the organization provides details of the authorized target network and sets rules and regulations for vetted hackers, represents a new trend in cybersecurity. Through the use of crowdsourcing, organizations pay white hat hackers to test their systems in a way that creates a more secure network (U.S. Department of Defense). These tests provide cash payouts in the thousands of dollars to participants who are able to find vulnerabilities on an organization's network. In many cases, these have a direct business advantage, saving potentially millions of dollars otherwise spent on cleaning up a compromise or paying for credit monitoring services for customers after a data breach.
As with vulnerability scanning, an information security manager needs to understand what tools are available and the different phases of the test. Many penetration testing tools are open source and can potentially save businesses thousands of dollars compared to comparable proprietary software. One such example is the penetration tool kit known as Kali Linux. Kali is a Linux distribution designed to provide tools for each level of a penetration test. It would be an exhausting exercise to detail all of the tools available at each level, so we merely outline the different phases of a penetration test. The first phase is surveillance, in which the penetration tester develops an understanding of the target network with various scanning tools. The second phase is target evaluation; the penetration tester may utilize the same tools available in a vulnerability scan, such as Nessus, to evaluate a target for weaknesses. In the third phase, the attacker attempts to obtain a foothold using exploitation tools such as Metasploit. In the fourth phase, the goal is to escalate privileges and potentially gain root-level or administrative-level access to a system. The final phase is to maintain a foothold by establishing multiple access methods and removing evidence of access (Muniz and Lakhani, 2013). As with vulnerability scanning, a penetration test is not complete without a full report of the access gained and recommended mitigating actions. Both types of test assist an information security manager in maintaining a high level of confidentiality, integrity, and availability, thus increasing overall business performance.
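The five phases above can be captured as a simple checklist that a test team might walk through and turn into the skeleton of the final report. This is an illustrative sketch only; the tool names are examples drawn from the text, and the helper function is hypothetical.

```python
# The five penetration-test phases described above, as a checklist.
# Tool names are illustrative examples, not an endorsement of a toolkit.
PENTEST_PHASES = [
    {"phase": 1, "name": "surveillance", "goal": "map the target network",
     "example_tools": ["nmap"]},
    {"phase": 2, "name": "target evaluation", "goal": "identify weaknesses",
     "example_tools": ["nessus"]},
    {"phase": 3, "name": "exploitation", "goal": "obtain a foothold",
     "example_tools": ["metasploit"]},
    {"phase": 4, "name": "privilege escalation",
     "goal": "gain root or administrative access", "example_tools": []},
    {"phase": 5, "name": "persistence",
     "goal": "maintain access, remove evidence", "example_tools": []},
]

def report_outline(phases):
    """Skeleton of the final report: one section per completed phase."""
    return [f"{p['phase']}. {p['name']}: {p['goal']}" for p in phases]

for line in report_outline(PENTEST_PHASES):
    print(line)
```

Keeping the phase list explicit also documents scope: anything outside these entries falls outside the authorized test, which matters legally for the "legal and authorized attempt" definition given earlier.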
Developing an Integrated Information Security Framework
In order to achieve the three core objectives of information security, we develop an integrated security framework (Table 1) based on the NIST Cybersecurity Framework and the IBM information security capability reference model. The NIST framework focuses on the principles of Identify, Protect, Detect, Respond, and Recover. In contrast, the IBM framework stresses the principles of People and Identity, Data and Information, Application and Process, Network Server and Endpoint, and Physical Infrastructure. We chose these two frameworks because they are industry best practices applied across different industries, and because both are actionable frameworks that provide metrics allowing us to understand security management from a holistic overview. Table 1 presents a combined framework based on the two security frameworks.
Scenario Case Study: Bring Your Own Device
Bring Your Own Device (BYOD) is one of the primary digital transformations for business due to its simplicity and low cost (Wang and Nemati, 2016). A BYOD strategy allows employees, business partners, and others to personally select the devices they would like to use, effectively maximizing both their value to the company and their own needs (Wang and Nemati, 2016). As the supply of these services increases, so does the demand for security, and the future of BYOD will require higher standards of education and training. Currently, most IT leaders have adopted BYOD and hold a positive view of it, seeing it as inevitable (Yakubu, 2013). It is known that BYOD improves employee satisfaction, which results in a pleasant experience for CIOs and their organizations (Yakubu, 2012). However, some of the most significant challenges facing security are ironically also some of its greatest assets.
BYOD Security Challenges
End-to-end encryption and BYOD policies are some of the hottest topics in IT security today. Risk management, well-managed security operations, and regular security assessments can help to alleviate these challenges. More importantly, BYOD is a way to access information using devices that are not owned and managed by the IT department (Yakubu, 2013). BYOD can be a problem because control and security of corporate data lie in the hands of employees, raising the possibility of sensitive data being exposed (Yakubu, 2013). On the other hand, BYOD can increase work efficiency and flexibility by allowing employees to work from anywhere (Yakubu, 2013). A challenge for the future of BYOD is the security risk posed by sensitive data getting into the wrong hands.
In addition, BYOD policies pose a particularly unique challenge for security (Wang and Nemati, 2016). Until recently, giving employees the choice to bring their own devices to the workplace would have been an unimaginable security concern. However, as corporations increase their use of mobile devices, that choice is increasingly demanded and required. With growing demand for access to an organization's data from personal devices not managed by an IT department, multiple types of threats that can prove detrimental to an organization have arisen.
Initially, IT staff attempted to defend these mobile devices using the same type of software they used for computer terminals. The problem with this approach was that there are too many different mobile operating systems to make it practical. To mitigate this developing security risk, companies have begun incorporating mobile device management (MDM) software. MDM software is installed on employees' devices to prevent the installation of malicious apps; it also encrypts sensitive data and attempts to segment the personal data on the device from the business data. However, installing software onto employees' own devices may introduce grave privacy concerns regarding the separation of corporate and personal data. Consider when an individual leaves the organization: how do we know they are not taking corporate secrets with them on their way out? Conversely, the privilege of using their own devices to access sensitive corporate data means employees will have to give up certain aspects of privacy in exchange for secure access (Stewart et al., 2015).
Possible Solutions Via Framework
According to the framework, organizations need to implement encryption mechanisms in accordance with their business processes. Encryption provides confidentiality and integrity. While you might ask how encryption can present a challenge for security professionals, consider that encryption reduces the visibility of traffic on the network. Sensors that used to produce alerts for malicious web traffic become much more restricted. Security administrators who were able to deploy data loss prevention software to detect insider threats or corporate espionage have a much more difficult time with the amplified use of end-to-end encryption. Using qualitative risk frameworks and quantitative measurements, security management must decide how to mitigate this risk. One option is to implement an expensive host-based security system that can see through end-to-end encryption and identify threats at the endpoints. Additionally, security teams should conduct regular security assessments in which penetration testers attempt to exfiltrate encrypted information while bypassing standard network sniffing tools. While these options are supportive, encryption will continue to evolve, and security administrators must learn to adapt.
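To make the quantitative side of that decision concrete, here is a minimal sketch of the standard annualized loss expectancy (ALE) calculation commonly used in quantitative risk analysis. The figures are invented for illustration and do not come from the text.

```python
def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate_of_occurrence):
    """ALE = SLE * ARO, where SLE (single loss expectancy) = asset value * exposure factor."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

def control_is_cost_effective(ale_before, ale_after, annual_control_cost):
    """A control is worth buying if the risk reduction it delivers exceeds its annual cost."""
    return (ale_before - ale_after) > annual_control_cost

# Illustrative numbers: a $200,000 data asset, 40% loss per incident,
# incidents expected once every two years (ARO = 0.5).
ale = annualized_loss_expectancy(200_000, 0.40, 0.5)
worth_it = control_is_cost_effective(ale, ale_after=10_000, annual_control_cost=15_000)
```

A host-based inspection system like the one mentioned above would be justified under this model only when the ALE reduction it provides exceeds its annual cost.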
In addition, organizations can use the framework to identify relevant software for securing their devices. As with all critical IT infrastructure and key resources, and especially newly incorporated software, businesses should implement a rigorous security assessment and testing strategy for BYOD. Finally, the company must ensure it is prepared to conduct security operations on the given devices if necessary. Specifically, this means developing standard operating procedures for conducting investigations on an individual's personal device. We must consider not only the confidentiality and integrity of the data on the device but also its availability to the user. That said, BYOD may actually improve security operations, as it could provide an alternate means for the business to function during disaster operations.
Conclusion
Security is an ever-evolving challenge for management (Luftman et al., 2016). As such, information systems professionals should be broadly familiar with the many management and planning issues that involve the security domain (McKeen and Smith, 2012). This study therefore attempts to provide a comprehensive understanding of IT security management for business organizations. Combining two market-leading security frameworks (NIST and IBM), we use the integrated framework as a lens to understand the current state of IT security management. In particular, we focus on several critical functions of IT security management: security and risk management, security operations, and security assessments and testing. We believe these functions provide the fundamental security services expected from a business management perspective. If these security functions are executed appropriately, an organization can expect a high return on its security investment. We suggest that future work focus on tailoring the framework to individual organizations, contributing further insights into information security management for specific purposes.
"year": 2020,
"sha1": "854f13f5007d3b5a84d54de7be6d9d4ca30cae61",
"oa_license": "CCBY",
"oa_url": "https://iiste.org/Journals/index.php/EJBM/article/download/53150/54920",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "5c8a28305f96c63f82852672a5737b55cae370d5",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Evaluation and management of orthostatic hypotension: Limited data, limitless opportunity
Although orthostatic hypotension is common and can have serious consequences, recommendations about its evaluation and management are based on limited data. Here, the author outlines a systematic approach, noting the areas that pose an opportunity for improvement.
An 83-year-old woman was transferred from another hospital because of refractory orthostatic hypotension (OH) and recurrent syncope for the past 3 months. She had been healthy throughout her life other than well-controlled hypertension and hyperlipidemia. She lived independently and was very functional. On admission, she could not stand for more than 1 to 2 minutes because of severe presyncopal dizziness. Her review of systems was otherwise negative, aside from frontal headaches that occurred primarily when her blood pressure (BP) was high, and constipation, which had recently worsened.
Her medications at the time of transfer included midodrine 10 mg three times a day, fludrocortisone 0.1 mg daily, and atorvastatin. Supine, her BP was 172/94 mm Hg and her heart rate (HR) was 64 beats per minute. Sitting, her BP dropped to 108/72 mm Hg with an HR of 76 beats per minute. After standing for 1 minute, her BP dropped to 66/42 mm Hg while her HR increased only to 84 beats per minute. She immediately sat down because of presyncopal dizziness. Other findings on examination, including a complete neurologic examination by a neurologist, were unremarkable.
She had already undergone many tests with normal results. These included a complete metabolic panel; complete blood cell count; thyroid function tests; urinalysis; electrocardiography; echocardiography; chest radiography; brain magnetic resonance imaging; autoantibody serologic testing (antinuclear antibody, Sjögren syndrome antibody A, Sjögren syndrome antibody B); tests for human immunodeficiency virus, Lyme disease, hepatitis B, and hepatitis C; vitamin B profile; vitamin D levels; and serum protein electrophoresis and free circulating light chains.
Which is the most appropriate next diagnostic test for this patient?
• Formal autonomic nervous system testing
• Serum paraneoplastic and autoimmune neuroautoantibody panel
• Abdominal fat pad biopsy
• Electromyography and nerve conduction studies
• Skin biopsy to measure nerve fiber density
The answer lies in an understanding of OH and key elements of the evaluation.
■ ORTHOSTATIC HYPOTENSION DEFINED
OH is present if the systolic BP drops by more than 20 mm Hg or the diastolic BP drops by more than 10 mm Hg. 1 The systolic BP is preferred because it correlates better with cerebral blood flow and symptoms. 2,3 If the patient is hypertensive, then a systolic drop of more than 30 mm Hg is the threshold. 1

■ ADAPTATION TO STANDING

When we stand up, gravitational forces lead to blood pooling in veins of the lower body, amounting to about 500 to 800 mL. About 50% of the pooling occurs in the thighs, 25% in the lower legs, and 25% in the pelvis. Given the increased venous hydrostatic pressure, plasma fluid leaks into the interstitial space, leading to a modest (10%-15%) decrease in plasma volume, decreased BP, and decreased pulse pressure (a useful marker of decreased stroke volume). These hemodynamic changes lead to decreased arterial baroreceptor firing, which in turn leads to increased sympathetic tone and decreased parasympathetic tone. This immediate response produces the appropriate compensations of tachycardia, arterial vasoconstriction, venoconstriction, and increased cardiac contractility. There are also increases in antidiuretic hormone and angiotensin II, but these take longer to take effect. In short, the immediate adaptations to orthostatic stress are primarily mediated by enhanced sympathetic activity.
OH develops when these compensatory measures fail. OH is very common, affecting up to 30% of ambulatory patients, especially at older ages. Hospitalized patients also have high rates, particularly transient OH related to immobility and volume depletion. OH causes troublesome symptoms such as orthostatic dizziness and lightheadedness, fatigue, visual blurring, muffled hearing, pain in the neck and shoulders ("coat-hanger" symptoms), and impaired concentration, as well as syncope and falls, often with injuries. However, many patients are completely asymptomatic despite severe reductions in BP. 3 A meta-analysis of available observational cohorts showed that OH is associated with significantly increased risk of death (risk ratio 1.50), coronary disease (risk ratio 1.41), stroke (risk ratio 1.64), and heart failure (risk ratio 2.25). 4 Despite extensive observational data identifying these risks, there are no clinical trials demonstrating that this risk can be modified by therapy.
■ EVALUATION OF ORTHOSTATIC HYPOTENSION
Following the appropriate procedure is essential for accurate identification of OH. BP and HR are measured with the patient supine after at least 5 minutes of supine rest. 1 The patient then is tilted up or, in the office, stands up, and BP and HR are measured at 1 minute and 3 minutes. Seated measurements are not needed, although I often obtain them to allow patients with severe OH to adapt before standing, and knowledge of seated BP levels is important when monitoring patients under treatment. Supine BP values are useful to identify supine hypertension (see discussion below). Standing values provide a measure of the severity of OH. In treated patients, measurements at the peak of drug action assess the effectiveness of therapy. Seated values, on the other hand, serve as a marker of safety, as they identify both hypotension in untreated patients and excessive BP elevation in patients with treated OH.
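As an illustration only (not a clinical tool), the consensus thresholds and this measurement protocol can be combined into a small helper; pressures are in mm Hg, and the higher systolic threshold applies to hypertensive patients as described above.

```python
def has_orthostatic_hypotension(supine_sbp, supine_dbp, standing_sbp, standing_dbp,
                                known_hypertension=False):
    """Return True if the orthostatic BP drop meets the consensus definition.

    Systolic drop > 20 mm Hg (> 30 mm Hg in a hypertensive patient) or
    diastolic drop > 10 mm Hg. Measurements are assumed to follow the
    protocol in the text (>= 5 min supine rest, standing at 1 and 3 min).
    """
    systolic_drop = supine_sbp - standing_sbp
    diastolic_drop = supine_dbp - standing_dbp
    systolic_threshold = 30 if known_hypertension else 20
    return systolic_drop > systolic_threshold or diastolic_drop > 10
```

Applied to the case patient (supine 172/94, standing 66/42 mm Hg, known hypertension), the systolic drop of 106 mm Hg far exceeds the 30 mm Hg threshold.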
Is there an appropriate heart rate response?
If the patient has OH, the first and critical question is whether there is an appropriate HR response (Figure 1).
As BP falls, the HR should increase in response. An appropriate HR response is defined by the ratio of the change in HR to the change in systolic BP with head-up tilt or standing. 5,6 In patients with intact autonomic responses, this ratio is greater than 0.5: for example, if the systolic BP falls by 40 mm Hg, a normal HR response should be an increase of greater than 20 beats per minute. 6 A ratio less than 0.5 identifies a neurogenic component with good sensitivity (91%) and specificity (88%). 6 Use of this ratio is an important recent advance in the evaluation of OH, though a recent study corroborated its sensitivity but demonstrated very low specificity (50%). 7 Therefore, it is likely that further refinement of the procedure will be needed.
If there is an appropriate HR response, think of common causes, such as volume depletion of any cause, vasodilator drugs, venomotor incompetence (very often associated with immobility), or systemic vasodilatory states.
If the HR response is inadequate, possibilities include the use of a negative chronotropic drug (eg, beta-blocker, verapamil, diltiazem, ivabradine), the presence of a cardiac conduction defect (easily identified by electrocardiography and often requiring a pacemaker for effective management), or autonomic failure (neurogenic OH).
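The delta HR/delta SBP ratio described above can be sketched as follows (illustrative only, not a clinical tool; the 0.5 cutoff and its reported sensitivity and specificity are the values cited in the text):

```python
def delta_hr_delta_sbp_ratio(supine_sbp, standing_sbp, supine_hr, standing_hr):
    """Ratio of the HR increase to the systolic BP fall on standing or head-up tilt."""
    sbp_fall = supine_sbp - standing_sbp
    if sbp_fall <= 0:
        return None  # no orthostatic fall, so the ratio is not meaningful
    return (standing_hr - supine_hr) / sbp_fall

def suggests_neurogenic_oh(ratio, cutoff=0.5):
    """A ratio below ~0.5 suggests a neurogenic component (91% sensitivity, 88% specificity)."""
    return ratio is not None and ratio < cutoff
```

For the case patient (supine 172 mm Hg at HR 64; standing 66 mm Hg at HR 84), the ratio is 20/106, approximately 0.19, well below 0.5, consistent with the neurogenic OH ultimately diagnosed.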
What are the neurogenic causes of orthostatic hypotension?
Autonomic neuropathy is a common cause of neurogenic OH. Possible etiologies of autonomic neuropathy are too numerous to list but include diabetes mellitus, amyloidosis, toxic neuropathies (drugs, heavy metals), infections, autoimmune diseases, hereditary conditions, paraneoplastic syndromes, and metabolic disorders. Table 1 provides a summary of the most common causes of peripheral autonomic neuropathies to help guide further diagnostic testing based on clinical plausibility.
An approach to sorting out the neurogenic causes of OH involves considering the type of associated neurologic findings (if any) and whether the onset of the OH was acute/subacute or chronic and progressive. 8 Using this approach, 5 distinct categories arise. In addition, a detailed medication review should identify drugs that may lower BP or predispose to OH. These include antihypertensives, diuretics, anticonvulsants, antipsychotics, antidepressants, opioids, and benzodiazepines.
Testing includes electrocardiography, complete blood cell count, complete metabolic panel, thyroid function tests, and urinalysis for all patients. Patients without obvious neurologic findings often undergo further testing guided by the nature of the findings. Many patients benefit from echocardiography to rule out pericardial disease, pulmonary hypertension, severe valvular disease (especially aortic stenosis), and left ventricular dysfunction. Likewise, a cosyntropin stimulation test may be done to rule out adrenal insufficiency.
Many other tests have limited data to support them but may be used creatively in the management of complex cases. For example, I often use bioimpedance to objectively measure extracellular fluid volume when unsure of the level of volume repletion in a patient, allowing me to adjust some of the treatments that target volume expansion (salt tablets, fludrocortisone). Likewise, autonomic testing equipment with beat-to-beat BP monitoring can provide hemodynamic data (stroke volume, cardiac output, peripheral resistance) that can help guide adjustments in medications. The equipment I use for autonomic testing (Finapres NOVA) has a hemodynamics module useful in complex cases, though this approach has only been used anecdotally and has not been tested in clinical trials.
A detailed autonomic evaluation using beat-to-beat BP and HR monitoring (during tilt and the Valsalva maneuver) and quantitative sweat responses may have value. But usually, when patients present with OH due to autonomic failure, the diagnosis is obvious, and autonomic testing adds little.
Electromyography, nerve conduction studies, skin biopsy to quantify nerve fiber density and identify amyloid fibrils (and possibly alpha-synuclein), and targeted serologic evaluation can be of value in the evaluation of patients with peripheral neuropathic findings.
Brain imaging, including magnetic resonance imaging, is always done for patients with motor findings. Sometimes magnetic resonance or computed tomographic angiography of the head and neck may be useful to evaluate the vertebrobasilar circulation in patients who develop severe orthostatic symptoms at BP levels that are not very low (eg, systolic BP > 120 mm Hg).
A dopamine transporter scan may be of value to confirm a diagnosis of Parkinson disease, multiple system atrophy, or dementia with Lewy bodies.
Finally, cardiac 123I-meta-iodobenzylguanidine scintigraphy or 18F-fluorodopamine positron emission tomography may help distinguish between multiple system atrophy and Lewy body synucleinopathies (Parkinson disease and Lewy body dementia). In the former, cardiac autonomic innervation is preserved, whereas in Parkinson disease and Lewy body dementia, cardiac uptake of catecholamines is decreased. 10
■ MANAGEMENT OF ORTHOSTATIC HYPOTENSION
Patients with nonneurogenic causes of OH can usually be managed with treatment of underlying disorders, removal of offending agents, and volume replacement. Likewise, a pacemaker may be needed for patients with qualifying conduction defects. Most causes of OH requiring long-term treatment are neurogenic. A consensus panel assembled by the American Autonomic Society and the National Parkinson Foundation recommends a stepwise approach to the treatment of neurogenic OH. 11

Step 1 is a detailed medication review to identify drugs that often cause OH. Long-acting antihypertensives almost always should be stopped. When absolutely needed, they should be administered at night. Antidepressants and anticonvulsants may have to be reconsidered.
Step 2 is the addition of nonpharmacologic measures. Exercise increases muscle tone and improves venomotor competence, reducing venous pooling, but should be either recumbent (eg, on a recumbent bike or rowing machine) or aquatic (swimming or pool-walking) to maximize tolerability.
I recommend high sodium (> 150 mEq/day) and fluid (at least 2 L/day) intake to most patients. A premeal water load, such as drinking 500 mL of water in about 5 minutes, can be useful, especially if the patient has significant postprandial symptoms. In patients with autonomic failure, there is a significant increase in BP for 60 to 90 minutes in response to the osmosympathetic reflex, whereby a decrease in osmolality of splanchnic blood results in an increase in sympathetic tone. 12

I also recommend external venous compression to all patients. Compression stockings should ideally come up to the waist to maximize the extent of compressed venous territory. Because the venous pressure at the level of the hips is about 30 mm Hg, patients should preferably wear garments that have a "30-40 gradient" (30 mm Hg at the thigh or waist and 40 mm Hg at the ankle), but some patients cannot tolerate the compression due to discomfort. In addition, some patients cannot get them on, so a compromise with lower-compression garments (20-30 mm Hg or 15-20 mm Hg) is often needed. Most patients tolerate waist-high garments except those who have urinary frequency or significant abdominal bloating or pain.
Step 3 is drug treatment. Despite the absence of high-quality evidence to support their use, 13,14 the cornerstone drugs are fludrocortisone, midodrine, and droxidopa; pyridostigmine and atomoxetine are used less often. Table 2 summarizes relevant pharmacologic and clinical features of these agents. Only midodrine and droxidopa are approved by the US Food and Drug Administration (FDA) for use in OH. All other medications are used off-label.
Fludrocortisone is a synthetic mineralocorticoid that increases extracellular fluid volume and increases sensitivity to catecholamines. 15 Because of its long duration of action, sustained hypertension (particularly at night) is often a problem limiting its use.
The vasoconstrictors midodrine and droxidopa are short-acting and therefore more useful for treatment during the daytime while avoiding supine hypertension at night. In one study, midodrine significantly increased the time to development of syncope or near-syncope on tilt testing by about 600 seconds, though not all patients responded. 16 Droxidopa is less potent than midodrine, but it does cause a significant increase in BP compared with placebo, along with a decrease in orthostatic symptoms. 17,18 Midodrine and droxidopa have never been compared against each other, but individual patients respond differently. Some have a greater response to midodrine than to droxidopa, and some the reverse. We do not yet know the reason for these differences, nor can we predict how patients will respond, so in practice, if one drug does not work well, I try the other. Combining droxidopa and midodrine has not been formally tested; anecdotal experience has at times been successful. 19

Pyridostigmine is an acetylcholinesterase inhibitor that increases cholinergic transmission in autonomic ganglia and peripheral nerves. It has a modest and inconsistent effect on OH. 20,21 The ganglionic effect increases sympathetic tone, particularly in response to orthostatic stress, thus limiting the occurrence of supine hypertension.
Atomoxetine is a selective norepinephrine transporter inhibitor with inconsistent effects on orthostatic BP, 22 but in one recent study it was noted to improve standing BP similarly to midodrine while producing marginally larger improvements in orthostatic symptoms. 23 Other medications used much less frequently, usually as last options when nothing else works, include octreotide, erythropoietin, desmopressin, pseudoephedrine, and ergot derivatives. 13

My opinion-based approach to initial therapy: If the patient has no supine hypertension, I start with either a vasoconstrictor or fludrocortisone. I prefer vasoconstrictors not only because they are FDA-approved, but also because they can be used on an as-needed basis to treat intermittent symptoms, which is often the case, especially in patients with mild disease or early in the course of a progressive disease. If patients have no heart failure, edema, or hypokalemia, one can use either fludrocortisone or a vasoconstrictor, but the presence of any of these conditions argues against fludrocortisone. I use pyridostigmine as the first choice only if a patient has mild neurogenic OH and significant constipation or gastroparesis, as it allows me to treat both the OH and the gastrointestinal hypomotility.
Step 4. Fludrocortisone and a vasoconstrictor can be combined. If the patient is already receiving both, then pyridostigmine or atomoxetine can be added.
Importantly, most of the trials supporting the above treatments are small, uncontrolled observational studies. There is much need for improvement. For example, we have no drugs that specifically target the impaired venomotor tone. Perhaps a drug that blocks the natriuretic peptide receptor could cause valuable venoconstriction; picture it as the opposite of a nitrate or nesiritide. Alternatively, noncatecholamine vasoconstrictors (vasopressin, angiotensin II) are available for intravenous use in critically ill patients, but these have not yet been translated into viable oral options for treating neurogenic OH. Desmopressin is a vasopressin V2-receptor agonist with limited pressor function; its modest favorable effects in neurogenic OH are likely related to decreased nocturnal urine output, not vasoconstriction. Terlipressin, on the other hand, is a potent vasopressin V1-receptor agonist used in patients with hepatorenal syndrome. It has a potent pressor effect in patients with neurogenic OH when given intravenously 24 but is not available in oral form. Additionally, and very importantly, we do not know the long-term impact of therapy on patient-reported outcomes, functional outcomes (injurious falls, syncope, cognition), or cardiovascular outcomes.
■ SUPINE HYPERTENSION
Supine hypertension is a common complication of OH, affecting 40% to 70% of patients and adding complexity to patient management. It is graded as mild if the supine BP is 140-159/90-99 mm Hg, moderate if 160-179/100-109 mm Hg, and severe if 180/110 mm Hg or higher, as measured after at least 5 minutes of supine rest. 25 I usually accept supine BPs up to 160/100 mm Hg, and depending on the severity of the OH, I may be forced to accept systolic pressures as high as 180 mm Hg. In such cases, 24-hour BP monitoring is extremely helpful to quantify the overall BP burden.
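The grading bands can be expressed directly. In this sketch (illustrative only, not a clinical tool), a reading takes the higher grade when the systolic and diastolic values fall in different bands, and the severe diastolic cutoff is assumed to be 110 mm Hg, the value implied by the moderate band's upper limit of 109 mm Hg:

```python
def grade_supine_hypertension(sbp, dbp):
    """Grade supine BP (mm Hg, after >= 5 min supine rest) per the bands in the text."""
    if sbp >= 180 or dbp >= 110:
        return "severe"
    if sbp >= 160 or dbp >= 100:
        return "moderate"
    if sbp >= 140 or dbp >= 90:
        return "mild"
    return "none"
```

The case patient's supine reading of 172/94 mm Hg, for example, falls in the moderate band on the systolic value alone.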
The approach to its treatment is first nonpharmacologic. Fludrocortisone should almost always be stopped. Vasopressors should not be given within 4 to 6 hours of going to bed. Elevation of the head of the bed, typically about 8 inches, is helpful but often not well tolerated. If using an adjustable mattress, the head of the bed is elevated about 30 degrees and, if adjustable, the foot of the bed is lowered by a similar amount. Also, if the presence of diabetes or obesity does not prohibit it, I often recommend a high-carbohydrate snack at bedtime if patients have a demonstrable response to it. The typical effective dose is 200 to 400 calories (50-100 g) in the form of pure carbohydrates, eg, candy. Sensitivity varies, and many patients have a good response to smaller doses.
Pharmacologic management is often needed. 26 Because of the problem of OH during the day, long-acting agents cannot be used. Short-acting antihypertensive drugs are given at bedtime. Several agents can effectively lower BP, but my personal preference for initial use is nitrates. Most of the studies have used topical nitroglycerin, 27 though to avoid hypotension, patients have to wake up early to remove the patch and stay in bed for 30 to 60 minutes before getting up. Because of this, I prefer isosorbide dinitrate (starting dose 20 mg, titrated up to 80 mg as needed).
Clonidine (0.1 mg orally) and nitroglycerin lower nighttime BP to a similar degree, but nitroglycerin has less residual BP-lowering effect in the morning. 27 Clonidine is often helpful in patients with residual sympathetic tone, which is most commonly observed in patients with multiple system atrophy.
Other drugs tested in single-dose trials include sildenafil, captopril, losartan, nebivolol, eplerenone, minoxidil, and hydralazine, with variable results and often a "tail effect" in the morning. 26 Even though losartan is relatively long-acting, it surprisingly does not worsen morning OH, presumably due to increased angiotensin II levels. 28 It is a drug I prescribe often, particularly in patients with chronic kidney disease or heart failure with reduced ejection fraction, in whom a blocker of the renin-angiotensin system has significant benefits.
■ CASE CONCLUDED
In our patient, the rapid pace of development raised concern for an acute autonomic ganglionopathy. Acute autonomic neuropathy is called ganglionopathy because the lesion is at the autonomic ganglia. 29 This is a rare disorder in which patients present with acute or subacute pandysautonomia (orthostatic hypotension, neurogenic bladder, gastrointestinal hypomotility, pupillary dysfunction, hypohidrosis) in various combinations. It is typically immune-mediated and can be transferred passively in animal models. The initially described form was caused by antibodies against the ganglionic acetylcholine alpha-3 receptor. 30 These antibodies have also been described in paraneoplastic autonomic ganglionopathy, although in that condition the most common antibody is the antineuronal nuclear antibody type 1 (ANNA-1, formerly called anti-Hu antibody). 29 These antibodies are tested using commercially available neuroautoantibody panels. Several other rare antibodies have been described, and 30% to 50% of patients presenting with the classic syndrome are seronegative. The severity of the elevation of antibody titers often correlates with the clinical presentation. It is likely that seronegative patients have antibodies against epitopes not yet identified, as many improve with immunomodulatory treatments. 31 Treatments reported include plasma exchange, intravenous immunoglobulin, and a variety of immunosuppressants. 29,32 Our protocol includes intravenous immunoglobulin with or without steroids.
Given this possibility in our patient, we obtained a neuroautoantibody panel (Mayo Clinic Laboratories). The patient had moderately high titers of antibody against the ganglionic acetylcholine receptor. Given her age, we suspected a paraneoplastic syndrome despite a lack of symptoms, but no tumor was identified on computed tomography (neck to pelvis), in addition to a normal recent colonoscopy. Sometimes the syndrome presents before a malignancy is clinically identifiable. However, in its absence, we diagnosed her as having autoimmune autonomic ganglionopathy with predominant cardiovascular involvement (and perhaps mild gastrointestinal disease, given the constipation). We treated her with intravenous immunoglobulin (2 g/kg over 5 days) and intravenous methylprednisolone (500 mg/day for 5 days). She had a positive response and was able to walk out of the hospital and attend rehabilitation 3 weeks after treatment was started. She remained on biweekly intravenous immunoglobulin for 2 months and on monthly doses for another 4 months. She continued to have OH but regained reasonable orthostatic tolerance and returned to independent living on maintenance therapy with midodrine 5 mg 2 to 3 times daily. Her current orthostatic tolerance is in the range of 7 to 10 minutes.
As for the other possible answers to the question regarding the most appropriate test for our 83-year-old patient, autonomic testing would not have given additional information. Amyloid was not likely based on the rapid rate of progression (ie, within 3 months) and the negative screen for AL amyloid. Hereditary amyloid forms and AA amyloid were clinically improbable. Electromyography and nerve conduction studies would probably not have helped, as the patient had no peripheral sensorimotor findings. Skin biopsy could be useful to identify decreased nerve fiber density as seen in small fiber neuropathies, but the presentation did not suggest this.
■ DISCLOSURES
Dr. Peixoto has disclosed research/independent contracting for Bayer, Boehringer-Ingelheim, Lundbeck, and Vascular Dynamics; serving as advisor or review panel participant for Ablative Solutions and Relypsa Pharmaceuticals; and serving as consultant/advisor or review panel participant for Diamedica Therapeutics. This presentation discusses off-label use of medications: fludrocortisone, pyridostigmine, octreotide, and atomoxetine.
1. No neurologic symptoms, acute or subacute onset.
Synucleinopathies (Parkinson disease, multiple system atrophy, Lewy body dementia, pure autonomic failure).
Peripheral neuropathic symptoms, acute or subacute onset: consider paraneoplastic syndromes, Sjögren syndrome and other connective tissue diseases, and toxic exposures.
5. Peripheral neuropathy, chronic progressive onset: consider diabetes, amyloidosis, autoimmune disorders, infections, toxic exposures, and metabolic or hereditary disorders.

Figure 1. Diagnostic approach to orthostatic hypotension. The delta HR/delta SBP ratio is the ratio of the change in heart rate divided by the change in systolic blood pressure with standing or head-up tilt. Most patients with neurogenic orthostatic hypotension have a ratio below 0.3. Most patients with a normal autonomic response have a ratio above 1.0.
Experimental study on scouring of coarse sand and fine sand seabed caused by propeller washing in front of solid wharf
Shuo Zhang, AJSRE, 2018; 3:14. AJSRE: http://escipub.com/american-journal-of-scientific-research-and-essays/
Introduction
With the use of high-power engines on ships and the improvement of ship maneuverability, the scouring caused by propeller wash in front of solid wharves has become an important topic in scour research, because it directly affects the water environment of inland or shallow seas and the stability of the wharf foundation [1].
Previous studies have focused on the effects of propeller position, wash flow field, and wharf position on the scouring caused by the propeller wash stream. In practice, however, given the randomness of a ship's mooring or anchoring, the scouring is not completed continuously in one pass; it proceeds intermittently until a balanced scour state is gradually reached. Hamill (1999) [2] carried out scour tests of a sand seabed washed by propellers at a quay wall, using four propeller models at different speeds and varying the distance from the quay wall (the distance from the propeller to the point of maximum scour when no structure is present). By comparison with the scour characteristics in the absence of structures (Hamill 1999; Sumer and Fredsøe 2002) [3][4], a prediction formula for the equilibrium scour depth at the foot of the wall was established. Schokking (2002) [5] used a simple water jet, a ducted propeller jet and a free propeller jet to study the stability of soil on a slope. He found that the stability of the soil under a ducted propeller jet differs greatly from its stability under a simple water jet, so a simple water jet cannot substitute for a propeller jet in experiments. Reference [6] (2005) used model tests to study the influence of the wall on the initial jet velocity and the attenuation of the axial flow velocity when the dock wall is 2 and 3 times the propeller diameter away. Hamill (2009) [7] used model tests to study the effect of the rudder on the flow generated by the propeller at the seabed, and proposed a neural-network method for predicting the flow velocity at the seabed.
Xie Zhikai (2004) [8] conducted erosion tests of the wharf foundation soil caused by propeller wash when a container ship was berthing and departing. Teresa (2010) [9] studied the effect of the rudder on the twin-propeller wash flow field during departure using a 1:16 model scale, and found that for high-power ships operating in shallow water the tangential and radial flow velocities generated by the propeller, and their effect on scouring, cannot be ignored. The above test methods are all based on continuous operation of the propeller and do not consider the changes in scour characteristics caused by intermittent propeller operation or by different soil particle properties. It is therefore necessary to carry out intermittent scour tests of propeller wash on seabeds of different soil particles.
In this paper, continuous scour tests of propeller wash on nine seabed soil conditions and intermittent scour tests under 18 working conditions were carried out by physical model test. The effects of the seabed soil characteristics and of intermittent propeller operation on the equilibrium scour depth were studied. Finally, a prediction formula for the maximum intermittent scour depth is established using the experimental data.
Test setup
The test was carried out in a water tank of 2.0 m in length, 1.2 m in width and 1.2 m in height.
The rotary motion of the propeller is generated by a motor with a power of 0.75 kW. The solid wharf surface is simulated with transparent tempered glass, in front of which the scour test phenomena can be observed. The scour terrain at the front edge of the solid wharf is measured using a three-dimensional topographic surveyor. A detailed description of the test equipment and instruments can be found in [10].
Propeller parameters
In order to produce different propeller wash flow fields, three types of propellers were used in the test, and the diameters of the propellers were 70 mm, 130 mm and 150 mm, respectively. The characteristics of the propeller are described in detail in Table 1. The plane position and height of the propeller in the water tank are the same as in the literature [10] .
Soil samples
Two soil samples were used in the test. One soil sample is coarse sand (d50 = 0.7 mm), and the other is fine sand. The properties of the first soil sample can be found in the literature [10]. The properties of the other soil sample can be obtained from the particle grading curve given in Figure 1; the median diameter of this soil sample can be read from the figure, and the content of particles larger than 0.075 mm exceeds 85% of the total weight.
Scour test
The continuous scour test and the intermittent scour test of the propeller wash flow were carried out for the two soil samples. Table 2 gives a detailed description of the scour test conditions. The continuous scour test of each soil sample was carried out under 9 conditions, and the intermittent scour test was carried out under 9 conditions each for intermittent scour times of 2 min and 3 min. Specific experimental procedures and steps can be found in the literature [10].
Effect of intermittent propeller operation on maximum equilibrium scour depth
In order to analyze the influence of intermittent propeller operation on the scour depth, working conditions 2, 5 and 8 are taken from the 9 working conditions, i.e. the cases where the propeller is 900 mm from the wharf surface and the propeller diameter is 70 mm, 130 mm and 150 mm, and the maximum scour depths at different intermittent scour times are compared. Table 3 and Table 4 show that intermittent operation of the propeller has a greater influence on the erosion of the fine sand seabed than on the coarse sand seabed.
Table 4. The maximum scour depth in coarse sand in intermittent-scour cases under different conditions (mm)
4. Prediction method of intermittent scouring depth
The prediction method for the maximum equilibrium scour depth during continuous scouring is given in [2]; this method is also applicable to the continuous scour cases in this paper. Reference [10] gives a prediction method for the intermittent scour depth on a coarse sand seabed, but that method considered only one type of propeller, which limits its applicability. In order to expand the applicability of the intermittent scour depth prediction method, this paper fits the intermittent scour test results on the fine sand seabed. When processing the test data, the maximum equilibrium scour depth during intermittent scouring and the horizontal distance from the propeller to the wharf surface are made dimensionless, and the experimental data are then fitted. On this basis, a formula for the maximum equilibrium depth during intermittent scouring is obtained, whose parameters denote the continuous scouring time, the distance from the propeller to the wharf, and the maximum equilibrium scour depth caused by continuous scouring, which can be calculated by the formula in [2].
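As a sketch of the dimensionless fitting step described above: the measured maximum intermittent scour depth and the propeller-to-wharf distance are made dimensionless and then fitted. The paper's actual formula is not reproduced in this extract, so the power-law form, the normalization choices (propeller diameter Dp, continuous equilibrium depth S_c) and all numbers below are illustrative assumptions, not the published coefficients.

```python
import numpy as np

# Hypothetical measurements: propeller-to-wharf distance X and maximum
# intermittent scour depth S_max, plus two normalizing scales.
Dp = 0.13                                        # propeller diameter (m)
S_c = 0.05                                       # continuous-scour equilibrium depth (m)
X = np.array([0.3, 0.6, 0.9, 1.2])               # distance to wharf (m)
S_max = np.array([0.068, 0.062, 0.057, 0.054])   # intermittent max scour depth (m)

# Dimensionless variables, then a log-log linear fit of y = a * x**b.
x = X / Dp           # dimensionless distance
y = S_max / S_c      # dimensionless scour depth
b, log_a = np.polyfit(np.log(x), np.log(y), 1)
a = np.exp(log_a)
```

For these made-up data the fitted exponent b is negative, reflecting the observation that scour depth decreases as the propeller moves away from the wharf.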
Conclusion
In this paper, the following conclusions can be drawn from the continuous and intermittent scour tests of propeller wash on coarse sand and fine sand seabeds in front of a solid wharf: 1. Under the same test conditions, whether for continuous or intermittent scouring, the maximum scour depth of the coarse sand seabed is larger than that of the fine sand seabed; and the farther the propeller is from the wharf surface, the greater the difference in maximum scour depth between the coarse sand and fine sand seabeds.
2. When the propeller works intermittently, the number of intermittent operations affects the maximum scour depth at the front edge of the solid wharf; increasing the number of intermittent operations aggravates the scouring of the seabed.
3. Under the same test conditions, compared with continuous scouring, the increase in maximum equilibrium scour depth caused by intermittent scouring is 5.1% to 37% on the fine sand seabed, but only 2% to 4.8% on the coarse sand seabed. The effect of intermittent scouring on the fine sand seabed is therefore greater than on the coarse sand seabed, and for fine sand it is more necessary to consider the impact of intermittent propeller operation on scouring.
4. A formula for predicting the maximum equilibrium scour depth on a fine sand seabed at the front edge of a solid wharf during intermittent propeller wash is established. The formula is dimensionless and can be generalized to actual cases of intermittent propeller-wash scouring.
Epigenetic changes during sepsis: on your marks!
Epigenetics is the study of how cells, organs, and even individuals utilize their genes over specific periods of time, and under specific environmental constraints. Very importantly, epigenetics is now expanding into the field of medicine and hence should provide new information for the development of drugs. Bomsztyk and colleagues have detected major epigenetic changes occurring in several organs as early as 6 h after the onset of a mouse model of multiple organ dysfunction syndrome induced by Staphylococcus aureus lung injury. Decrease in mRNA of key genes involved in endothelial function was found to be associated with (and potentially explained by) a decrease in permissive histone marks, while repressive marks were unchanged. We discuss here the limitations of a whole-organ as opposed to a cell-specific approach, the nature of the controls that were chosen, and the pitfalls of histone modifications as a cause of the eventual phenotype. While the use of ‘epidrugs’ is definitely welcome in the clinic, how and when they will be used in sepsis-related multiple organ dysfunction will require further experimental studies.
Inserm UMR_S 1155, "Rare and common kidney diseases, matrix remodelling and tissue repair", Hôpital Tenon, 75020 Paris, France. The study by Bomsztyk and colleagues is the first time that the community of intensivists has been provided with translational research exploring a major aspect of epigenetics (here, histone modifications) in sepsis-related multiple organ dysfunction syndrome (MODS).
Epigenetics is a comparatively recent science, rapidly expanding in the field of medicine. Its definition has greatly evolved during the past century. At present it is defined as 'the structural adaptation of chromosomal regions so as to register, signal or perpetuate altered activity states' [2]. Physically, covalent modifications of histones (the proteins around which DNA is enwrapped, constituting nucleosomes) and methylation of DNA itself are two major biochemical changes strongly influencing how cells, organs, and even individuals make use of their genes over specific periods of time, and under specific environmental constraints. The reason for this is that some of these biochemical marks actually determine the accessibility of genes to RNA polymerase II and to relevant transcription factors. If we could identify if, when and how a given epigenetic mark, or a combination of marks, is induced by an injury, and also understand its real biological impact (good or bad) on the clinical outcome of that injury, then promoting or erasing these marks by using drugs -'epidrugs' -could potentially represent a major breakthrough. By its severity, and the wide repercussions on a number of organs, sepsis exemplifies an event where epigenetics could help to reach unmet needs. Just like goal-directed resuscitation, based on pathophysiology and now a source of therapeutic targets [3], epigenetic modifications could very well be the future for care of patients with MODS: epidrugs are coming.
Endothelial cell dysfunction is a hallmark of sepsis. Bomsztyk et al. give insights into how epigenetic modifications are involved by using a mouse model of MODS induced by acute lung injury by Staphylococcus aureus. In short, they observed at the early time point of 6 h that the decrease in mRNA of key genes for endothelial function was associated with a decrease in histone marks known to be permissive (that is, to facilitate transcription). As a comparison, repressive marks were not induced at this time point. This suggests that an early intervention preserving or restoring these epigenetic marks would help to preserve endothelial integrity during MODS.
In our opinion, however, three issues need to be addressed to ensure the validity of this promising approach. The first one is technical but of utmost importance: which cell subtype should be studied when considering the epigenome? In homeostasis, many cells coexist in every organ, each with a similar genome but with its own epigenome: it is the very definition of epigenetics. In sepsis, there is a massive infiltration of immune cells in target organs, which makes a cell-specific approach (as opposed to an organ-specific one) even more essential. This is exemplified by neutrophil gelatinase-associated lipocalin (NGAL), a well-known marker of acute kidney injury, yet abundantly produced by neutrophils [4]. Here, NGAL is shown to be upregulated in the three organs studied, namely the kidney, the lung, and the liver, but with different epigenetic patterns: permissive acetylation of lysine 4 and 9 of histone H3 is increased in exon 1 in the liver and lung, but not in the kidney. Conversely, repressive methylation of lysine 27 of histone 3 decreases in the lung while increasing in the kidney. This emphasizes the need to integrate the different epigenetic marks in order to not only understand the activity of RNA polymerase II in this region of the genome, but also to elucidate which cell acquires or loses these marks. Short of a cell-specific approach, results are complex and potentially confusing. One could speculate that the NGAL gene is differentially regulated in renal and inflammatory cells. We encountered similar problems and recently proposed ex vivo cell sorting to circumvent this issue in the kidney, at least in experimental conditions [5].
The second point refers to the control arm: mice undergo neither anesthesia nor mechanical ventilation. The authors argue that a previous microarray did not show substantial decrease in genes of interest, but this does not imply that the epigenome is stable. In addition, anesthesia itself -here isoflurane -has been shown to modify histone marks [6]. We would like to see evidence that, at least for some genes, polymerase 2 is active in endothelial cells; the authors only show negative marks (a decrease in permissive histone modifications, with a decrease in RNA polymerase II density). As explained above, choosing NGAL as a positive control takes the focus away from the endothelium and is still compatible with vascular rarefaction. A 'positive' control induced by MODS in endothelial cells is required [7][8][9].
Finally, how to articulate the cause of injury, the studied phenotype (here, endothelial dysfunction), and epigenetic marks is a general challenge in this emerging field of medicine. This necessitates analysis of the chronology and biological mechanisms whereby epigenetic marks are being bound. Cell metabolism, cell cross-talk, cytokines, and pathogen motifs may all be at issue. Bomsztyk and coworkers have made an important contribution by showing evidence of differences in RNA polymerase II activity. Upstream, cell-oriented mechanisms during sepsis must be clarified if we want to devise new therapies at given time points. We envisage that the impaired energy metabolism observed in sepsis [10] could drive many epigenetic changes. Knowing the highly dynamic nature of epigenetics, with some marks being furtive and others durable, timing of observation surely plays a critical role in how we interpret data; hence the need to be cautious about the causality of pathological changes that might only be temporary. The functional impact of one specific mark is still uncertain.
Conclusion
Epigenetic studies of systemic diseases such as sepsis-related MODS may help to understand how cell damage proceeds. Obviously, experimental models are essential to explore causality. However, precision regarding the cell subtype involved in epigenetic changes is mandatory, and the timing of analysis is crucial to determine when epidrugs could be introduced in the clinic.
Modeling and Experimental Methods to Probe the Link between Global Transcription and Spatial Organization of Chromosomes
Genomes are spatially assembled into chromosome territories (CT) within the nucleus of living cells. Recent evidence suggests associations between the three-dimensional organization of CTs and the active gene clusters within neighboring CTs. These gene clusters are part of signaling networks sharing similar transcription factor or other downstream transcription machineries. Hence, presence of such gene clusters of active signaling networks in a cell type may regulate the spatial organization of chromosomes in the nucleus. However, given the probabilistic nature of chromosome positions and complex transcription factor networks (TFNs), quantitative methods to establish their correlation is lacking. In this paper, we use chromosome positions and gene expression profiles in interphase fibroblasts and describe methods to capture the correspondence between their spatial position and expression. In addition, numerical simulations designed to incorporate the interacting TFNs, reveal that the chromosome positions are also optimized for the activity of these networks. These methods were validated for specific chromosome pairs mapped in two distinct transcriptional states of T-Cells (naïve and activated). Taken together, our methods highlight the functional coupling between topology of chromosomes and their respective gene expression patterns.
Introduction
The genetic material (chromatin) in eukaryotic cells has a multiscale three dimensional organization within the nucleus [1]. DNA is packaged around histone and non-histone proteins to form the 30 nm chromatin fibre [2]. This 30 nm fibre is further hypothesized to be organized into relatively open euchromatin and condensed heterochromatin structures based on post translational modifications of histone [3]. Imaging methods using whole chromosome probes (FISH) reveal the spatial dimension to genome organization in eukaryotic cells. These methods have suggested that chromatin is organized into well-defined chromosome territories (CT), in a tissue specific non-random manner [4][5][6][7]. These chromosome positions remain largely conserved during the interphase in proliferating cells [8][9][10]. In addition, whole genome chromosome conformation capture assays have shown intermingling of neighbouring CTs [11] as well as a model of the yeast genome organization [12]. Further on a smaller scale, these methods have demonstrated that the genes from neighbouring CTs loop out and are found to co-cluster with transcription machinery to form three dimensional interactions called active transcription hubs [13]. The intermingling of nearby CTs vary in concert with transcription and cellular differentiation [14,15], demonstrating the role of chromosome topology in genome regulation [16]. Individual gene labeling methods suggest that candidate gene clusters are spatially co-localized [17] and are coregulated for their specific transcriptional control [18][19][20][21][22][23][24]. Using 2D matrices of chromosome distances at prometaphase stage, the correspondence between co-regulated genes and chromosome positioning has been observed during differentiation [19]. However, methods to describe the correlations between threedimensional architecture of chromosome positions [25,26] and global gene expression as well as TFNs is largely unexplored.
In this paper, we present a quantitative approach to test the correlation between chromosome organization and transcriptional output of the cell. Inter-chromosome Physical Distance (IPD) matrix computed from chromosome centroids in interphase human male fibroblasts [27] revealed non random chromosome organization. Inter-chromosome Activity Distance matrix, constructed from the microarray data obtained for human fibroblast [28], suggested that chromosomes with similar gene activity were spatially clustered in a tissue specific manner. We formulate an energy optimization function, 'H' to elucidate the correspondence between the annotated TFNs [29] and spatial positioning of chromosomes. Numerical simulations of the H function, that relates the activity of genes of specific networks to their corresponding chromosomal positions, suggest the sensitivity in network topology. The prediction from our numerical methods were experimentally validated by correlating chromosome distances for specific pairs with their respective activity distances in two distinct transcriptional states of murine T-Cells (naïve and activated). Taken together these numerical modeling and experimental methods provide an important platform to probe the functional coupling between spatial organization of chromosomes and their epigenetic states.
Results
Methods to probe the correlation between the organization of chromosomes and their transcriptional activity

3D chromosome FISH was used to map chromosome positions in two cell phases: interphase and prometaphase [27,30]. Based on these observations we extracted the coordinates of all chromosome centroids in human fibroblasts measured for 54 nuclei, as reported by Bolzer et al. [27], which is the only available full map of all chromosome positions. Inter-chromosome Physical Distance (IPD) matrices were constructed from the mean distances between centroid positions of the 22 pairs of autosomes (Figure 1A) as

IPD_ij = ⟨ |r_i − r_j| ⟩,

where r_i = (x_i, y_i, z_i) and r_j = (x_j, y_j, z_j) are the coordinates of chromosome i and chromosome j, respectively, and ⟨.⟩ denotes averaging over the 54 nuclei. Figure 1A shows the inter-chromosome physical distance between the ith and jth chromosome in the nucleus, which represents the (i,j)th element of the IPD matrix. The IPD matrices were constructed for interphase (IPD_fib, Figure 1B), prometaphase (IPD_prometaphase, Figure 1C) and a randomized nucleus (IPD_rand, Figure 1D). The values of the diagonal elements of all the matrices, which represent the mean distance between homologues, are kept minimal and are not considered in any further correlation analysis. Further, regions of low IPD values and high IPD values are observed in the IPD matrices for interphase and prometaphase (Figures 1B and 1C), suggesting a contribution of chromosome size (in total number of base pairs), which decreases from chromosome 1 to 22 (Figure S1A). The volume of a given chromosome changes dramatically in interphase [31] due to changes in epigenetic modification and subsequent transcriptional states. Hence the IPD is an average of chromosome centroid distances over all such conditions, showing spatial clustering of chromosomes.
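A minimal NumPy sketch of the IPD computation defined above, assuming centroid coordinates are stored per nucleus; the `ipd_matrix` helper name and the random toy coordinates are illustrative, not the Bolzer et al. data.

```python
import numpy as np

def ipd_matrix(centroids):
    """Mean inter-chromosome physical distance matrix.

    centroids: array of shape (n_nuclei, n_chromosomes, 3) holding the
    (x, y, z) centroid of each chromosome territory in each nucleus.
    Returns the (n_chromosomes, n_chromosomes) matrix of pairwise
    centroid distances averaged over nuclei, i.e. IPD_ij = <|r_i - r_j|>.
    """
    # pairwise difference vectors within each nucleus: (n_nuclei, n_chr, n_chr, 3)
    diff = centroids[:, :, None, :] - centroids[:, None, :, :]
    d = np.linalg.norm(diff, axis=-1)   # Euclidean distances per nucleus
    return d.mean(axis=0)               # average over the nuclei

# toy example: 54 nuclei, 22 autosome centroids in a 10-unit box
rng = np.random.default_rng(0)
cent = rng.uniform(0, 10, size=(54, 22, 3))
ipd = ipd_matrix(cent)
```

The result is symmetric with a zero diagonal; in the analysis above the diagonal (homologue) entries are excluded from the correlations.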
Such clustering was not observed in the randomized nucleus, where these matrices were generated by randomly swapping the rows and columns of IPD matrix multiple times and hence permuting the identities of the chromosomes in the interphase IPD matrix (Methods). Pearson Correlation Coefficient (PCC) estimation between initial IPD and progressive randomization showed significant decrease in PCC values after 30 such permutations ( Figure S2). Interphase and prometaphase IPDs were found to be positively correlated ( Figure 1E) with PCC of 0.904 ( Figure 1G), whereas IPD fib was uncorrelated with the randomized position matrix (mean PCC of 0.17 with standard deviation of 0.13, computed over 10,000 randomized matrices) ( Figure 1F -a representative scatter plot, 1G and Figure S3), confirming the nonrandom organization of the chromosomes in interphase cell nucleus.
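The randomized-nucleus control and the PCC comparison described above can be sketched as follows. The helper names are hypothetical, and the assumption (consistent with the text) is that chromosome identities are permuted by applying one permutation to both rows and columns, and that the PCC is computed over the off-diagonal entries.

```python
import numpy as np

def offdiag_upper(m):
    """Upper-triangle (off-diagonal) entries of a symmetric matrix."""
    i, j = np.triu_indices_from(m, k=1)
    return m[i, j]

def pcc(a, b):
    """Pearson correlation between the off-diagonal entries of two matrices."""
    return np.corrcoef(offdiag_upper(a), offdiag_upper(b))[0, 1]

def permute_identities(m, rng):
    """Randomized-nucleus control: relabel the chromosomes by applying the
    same permutation to rows and columns, keeping the matrix symmetric."""
    p = rng.permutation(m.shape[0])
    return m[np.ix_(p, p)]

# toy symmetric IPD-like matrix
rng = np.random.default_rng(0)
ipd = rng.uniform(1, 10, size=(22, 22))
ipd = (ipd + ipd.T) / 2
np.fill_diagonal(ipd, 0.0)
ipd_rand = permute_identities(ipd, rng)
```

Repeating `permute_identities` many times and recording `pcc(ipd, ipd_rand)` gives the distribution of PCC values for the randomized control.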
To probe the possible correlation between the chromosome positions and their gene activity (Figure 2A), we generated an Inter-chromosome Activity Distance (IAD) matrix for fibroblast from the microarray data (Figure 2B) obtained from Goetze et al. [28]. Figure 1A shows the schematic of gene activities being classified to the i-th and j-th chromosomes, which is then used to compute the (i,j)-th value in the IAD matrix. From the microarray data, genes were grouped into their respective chromosomes and the mean logarithmic chromosomal activities (A_chr) were obtained (Methods). The density of genes on a chromosome does not correlate with the length of the chromosome (Figure S1A). For instance, chromosome 18 is larger than chromosome 19, but the former has a smaller number of genes than the latter (Figure S1B). Considering this, chromosomal activity was computed by normalizing the total activity of all the genes by the annotated number of genes and not by the chromosome size. The Inter-chromosome Activity Distance (IAD) was then computed as

IAD_ij = | log(A_i^chr) − log(A_j^chr) |.

Use of the logarithmic scale captures expression levels over several orders of magnitude. Lower IAD values, shown by cooler colors in the heat map, represent chromosome pairs with similar transcriptional activity, whereas warmer colors represent higher IAD, i.e. dissimilar chromosomal activities, as seen in Figure 2B. The correlation between the IAD_fib and IPD_fib matrices at interphase (Figure 2C) revealed a positive slope (1.23) and PCC (0.58, Figure 2E) with a small false discovery rate (FDR) of 0.11 (Figure S4 and Table S1). To probe the specificity of this correlation, we used the IPD at prometaphase (IPD_prometaphase) as a negative control. Indeed, we obtained a lower slope (0.44) and PCC (0.27) when IAD_fib (for interphase) was correlated with IPD_prometaphase (Figure S3B), with a larger FDR ~0.29 (Table S1 & Figure S4C), suggesting that the IPD at interphase is more correlated with the IAD at interphase.
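A sketch of the IAD construction, assuming (as the definition above states) that each chromosome is reduced to one mean log activity and the matrix entries are absolute differences; the toy activity values are made up.

```python
import numpy as np

def iad_matrix(mean_log_activity):
    """IAD_ij = |<log A>_i - <log A>_j| from per-chromosome mean log
    activities (one value per chromosome, already normalized by the
    annotated number of genes, not chromosome size)."""
    a = np.asarray(mean_log_activity, dtype=float)
    return np.abs(a[:, None] - a[None, :])

# toy: mean log activities for 22 autosomes, spanning 3.0 .. 8.0
act = np.linspace(3.0, 8.0, 22)
iad = iad_matrix(act)
```

Correlating the off-diagonal entries of `iad` against the corresponding IPD entries (as in the PCC helper sketched earlier) reproduces the IPD-IAD comparison described in the text.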
Correlation with the randomized matrix exhibited a negative slope (−0.37, Figure 2D) for a typical randomized matrix (Figure 1D) and an even smaller average PCC of 0.12 (Figure 2E & Figure S3D), further indicating the non-randomness of the correlation. To probe the effect of chromosome size on the observed correlation between IPD and IAD, we generated a matrix of chromosome base-pair length differences (Inter-chromosome Basepair Distance, IBD) (Methods and Figure S1C). This matrix showed some degree of similarity to the IPD and correlated well with the IPD matrix (PCC ~0.54) (Figure S1D & S1F). But when the IBD was correlated with the IAD matrix, a very weak correlation of PCC ~0.15 was observed (Figure S1E and S1F). This suggested that although chromosome size contributes to the observed pattern in the IPD, the correlation between IPD and IAD was not influenced by chromosome sizes. The Inter-chromosome Activity Distance matrix used in the correlations was generated by computing the mean of the genes present in the chromosome, which takes into account all genes irrespective of their activity level. To probe the correlation due to a small subset of genes on the chromosome, corresponding to smaller active regions of chromosomes (as the active genes are not uniformly distributed throughout the length of a chromosome), we selected genes (~25% of the genome) which are highly expressed in each chromosome by applying a threshold (more than 40% of the mean chromosome activity) to the gene expression. We generated the IAD matrix (Figure S5A) from these selected genes (IAD_select) and calculated the correlations. The correlation obtained after selection of genes was similar to the correlation when all genes in the chromosome were used (Figure S5B & S5C), suggesting that the correlation is not due to whole-chromosome averaging.
These results suggest that the mean distances between chromosomes are more correlated with gene activity distances in fibroblasts at interphase than at prometaphase, and are uncorrelated with a random organization.
Methods to identify cell-type specific gene expression profiles and their correlation to chromosome positions

Different cell types in an organism are characterized by their distinct transcriptomes. Correlation of gene expression to chromosome organization implies that cell types will differ in the positions of the chromosomes, such that the spatial organization of a given cell type exhibits larger correlation with its own expression pattern. In the presence of a cell-type specific correlation between IPD and IAD, the correlation should be smaller when IPD_fib is correlated with the IAD of other cell types. To further extend our approach to test such cell-type specific correspondence, we correlated the IPD_fib of fibroblast (for interphase) with the IADs of fibroblast, lung endothelial cells, oocyte and Human Umbilical Vascular Endothelial Cells (HUVECs). As the IAD_fib of fibroblast at interphase correlated the most with the IPD of fibroblast at interphase, the interphase IPD of fibroblast was used for testing the cell-type specific correlation.
From the transcriptome of different cell types, cell type specific genes were selected by excluding similarly expressing genes in pair wise comparison with fibroblasts to generate IADs ( Figure 3A). Two activity matrices were generated for each pair of cell type compared: (a) IAD other-fib -computed from the activity of genes in the other cell type which are differentially expressed in comparison to fibroblasts (Methods) and (b) IAD fib-other -computed from activity of the same genes selected above, in fibroblast. Such activity matrices were computed for each of the three pairs, fibroblast-lung ( Figure 3B), fibroblast-oocyte and fibroblast-HUVEC (Methods , Table S2, and Figure S6A & C). Figure 3C depicts the difference matrix of IAD fib-lung and IAD lung-fib for the differentially expressed genes of fibroblast and lung cells. Figure S6B & D shows difference matrices for other cell type pairs. The PCCs were higher when IPD fib was correlated with IAD fib-other , whereas the PCCs were comparatively smaller when IPD fib was correlated with IAD other-fib ( Figure 3D, Figure S6, Figure S7, Table S1).
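The pairwise gene selection and the two activity matrices (IAD_fib-other and IAD_other-fib) described above can be sketched as follows. The fold-change cutoff, the log2 pseudocount and the helper name are illustrative assumptions; the text does not specify the thresholds used to define differential expression.

```python
import numpy as np

def paired_iads(expr_fib, expr_other, chrom, n_chr=22, fold=1.5):
    """Build IAD_fib-other and IAD_other-fib from the genes that are
    differentially expressed between fibroblast and another cell type.

    expr_*: per-gene expression vectors; chrom: chromosome index (0..21)
    of each gene. The same selected gene set is evaluated once with
    fibroblast activities and once with the other cell type's activities.
    """
    lf = np.log2(expr_fib + 1.0)
    lo = np.log2(expr_other + 1.0)
    diff = np.abs(lo - lf) > np.log2(fold)        # differentially expressed genes

    def iad(logx):
        means = np.array([logx[diff & (chrom == c)].mean()
                          for c in range(n_chr)])
        return np.abs(means[:, None] - means[None, :])

    return iad(lf), iad(lo)                       # IAD_fib-other, IAD_other-fib

# toy transcriptomes: 2200 genes spread evenly over 22 chromosomes
rng = np.random.default_rng(0)
chrom = np.arange(2200) % 22
expr_f = rng.lognormal(3, 1, 2200)
expr_o = rng.lognormal(3, 1, 2200)
iad_fo, iad_of = paired_iads(expr_f, expr_o, chrom)
```

Correlating IPD_fib against `iad_fo` and `iad_of` separately then tests whether the fibroblast chromosome positions match the fibroblast activities of the selected genes better than the other cell type's activities.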
These observations suggest that the association between the chromosome topologies and transcription maps is indeed cell-type specific (Figure S8).
Numerical simulation to probe the coupling between chromosome positions and transcription factor networks

Genome-wide chromatin interaction experiments have suggested preferential association of genes co-regulated by similar transcription factors [32]. Such cis-(same chromosome) and trans-(different chromosome) associations have also been shown for co-regulated genes at the post-transcriptional level at other nuclear bodies [33,34], suggesting spatial association of genomic elements to facilitate function. In order to probe such associations we devised an energy optimization function and a numerical simulation technique that link the chromosome positions (IPD) to the co-regulated TFNs. In particular, we examined to what extent two chromosomes that participate in the same transcription network tend to be close by. For this purpose, we constructed a function H, which measures whether nearby chromosomes contain co-regulated genes that belong to a particular functional transcription factor network ( Figure 4A). This approach eliminates the whole-chromosome averaging that we performed while computing the correlations between IPD and IAD, and considers the activities of only those genes which are regulated by a particular transcription factor. However, the position information of genomic elements currently available is at the resolution of whole chromosomes, leading to coarse graining of this energy optimization function at similar length scales. H takes into account both the spatial arrangement of chromosomes and the activity of the 87 known annotated TFNs [29], and quantifies how well they correspond to each other. The spatial part of H is represented in terms of an adjacency matrix ( Figure 4B), in which the parameter l is a distance parameter used to scale the distances to the length scale of chromosomes.
The part of H which involves the contribution from transcription factor networks is introduced as a network matrix ( Figure 4C), defined in terms of the logarithm of I cell if, the integrated microarray intensity of genes present in the i-th chromosome that participate in network ''f'' in cell type ''cell'' of the four cell types. Similar to the definition of the IAD, the logarithmic scale captures the different orders of magnitude of gene expression. The TF networks which form the network matrix vary from very small networks (<10 genes) to large networks (>300 genes, Figure S9A). To characterize the TF networks for variability in their sizes, we computed the occupancy of chromosomes for each TF network ( Figure S9B), defined as the fraction of the total number of chromosomes which have at least one gene from the TF network. Large TFNs have occupancy ~1, suggesting that the target genes of these TFNs are scattered throughout the genome, whereas smaller TFNs (<50 genes) have occupancy ~0.5, suggesting that their target genes are clustered on a few chromosomes. This clustering of genes of a TFN is not biased by chromosome size, i.e. the genes are present on smaller as well as larger chromosomes ( Figure S9A). Further, larger chromosomes and gene-rich chromosomes were observed to be associated with a larger number of TFNs ( Figure S10 and Table S3). The function H, which has contributions from the spatial organization of chromosomes and the activity of transcription factor networks, is obtained by summing over all networks f for all possible pairs i-j of chromosomes, weighted according to the proximity of the chromosomes provided by the adjacency matrix, A fib ij. The distance parameter l weights the IPD values, such that smaller IPD values attain larger adjacency and vice versa. Moreover, l makes a sharp distinction between nearby and distant chromosomes.
For each pair i-j of chromosomes, we examine the similarity in the expression levels of genes that belong to a certain network f by summing their squared difference (W cell if − W cell jf)^2; this ensures that the contribution from each pair is positive and tends to zero for similar activity. The matrices are defined such that H attains its optimal value when the organization of the chromosomes is correlated with the activity of the networks. We used numerical simulations to test the above hypothesis. Before performing the actual simulations, we estimated the optimal value of the distance parameter l to be ~7% of the nuclear radius, as it provided the largest increase in the value of H ( Figure S11A). We used this distance parameter for all the numerical simulations. To probe the optimality of the value of H, we simulated different configurations of chromosome organization by randomizing the adjacency matrix. Large deviations of H from the randomized configurations indicate that the actual configuration is optimal.
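Combining the verbal definitions above, a hedged sketch of H can be written down: the adjacency weight is assumed to be exp(−IPD_ij/l), as described in Methods, and each network's contribution is the adjacency-weighted squared activity difference. The published display equations are not reproduced in this text, so `coupling_H` below illustrates the stated construction rather than the verbatim definition:

```python
import math

def adjacency(ipd, lam):
    """Assumed form A_ij = exp(-IPD_ij / lam): nearby chromosome pairs get
    weights near 1, distant pairs decay quickly (cf. Methods; the optimal
    lam was ~7% of the nuclear radius)."""
    n = len(ipd)
    return [[math.exp(-ipd[i][j] / lam) for j in range(n)] for i in range(n)]

def coupling_H(ipd, w, lam):
    """Sketch of the optimality measure H: for every network f and every
    chromosome pair i < j, the squared activity difference (W_if - W_jf)^2
    is weighted by the spatial adjacency A_ij and summed."""
    a = adjacency(ipd, lam)
    n = len(ipd)
    n_networks = len(w[0])  # w[i][f]: log activity of network f on chromosome i
    return sum(
        a[i][j] * (w[i][f] - w[j][f]) ** 2
        for i in range(n)
        for j in range(i + 1, n)
        for f in range(n_networks)
    )
```

With this form, pairs of chromosomes that are both nearby (large A_ij) and dissimilar in network activity contribute most, which is what the randomization test then probes.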
Following this procedure, we find that ΔH fib (ΔH cell for fibroblast) is 1.97, implying a p-value of 0.05 ( Figure 5B, inset). This indicates a rather small probability of obtaining a superior configuration through random reorganization of chromosomes in fibroblasts. The network matrix of a given cell type represents its characteristic transcriptional program and the resulting transcriptome. One therefore expects that the coupling between the physical organization and transcription networks will be cell type specific. Since the spatial part of H is taken from fibroblasts, namely the adjacency A fib ij, it should exhibit a better fit to the fibroblast network activity, W fib if, than to the activity of the other cell types. In accord, the deviations for the other three cell types were lower than that of fibroblasts, the values being ΔH lung = 0.92, ΔH HUVEC = 1.02 and ΔH oocyte = 0.71. These correspond to p-values of 0.35, 0.31 and 0.48, respectively, indicating that the H values obtained for different cell types are not significantly different from the values obtained for a random configuration of chromosomes. Further, the obtained p-values were independent of the mode of simulation; exclusion of homologues from the simulation and non-cumulative randomization resulted in similar p-values ( Figure S12 and Methods). These results suggest that IPD fib of fibroblasts fits better to its own transcriptome than to those of other cell types.
To analyze the sensitivity of individual TFNs, H values were computed by randomization of the adjacency matrix for the chosen network. The evolution of the H value over the first 200 iterations was plotted for each network. ~70% of networks showed an increase in the H value ( Figure 5C), whereas the remaining 30% showed a small decrease ( Figure 5D). This indicates the differential contribution of the networks towards optimization of the coupling between TFNs and chromosomal organization. The larger increase in H value for 70% of the networks is in accordance with the increase in the value of H when all networks are considered. Figure 5E shows a list of networks that exhibited maximal changes in H values, indicating their sensitivity to perturbations in chromosomal positions. To probe the contribution of TF networks towards the differential increase in the value of H, we correlated the change in H value, ΔH/H 0, with the number of genes in the TF network considered for simulation ( Figure S13A). It was observed that ΔH/H 0 and the number of target genes of a TF network were inversely correlated, with the degree of correlation decreasing with increase in the distance parameter l ( Figure S13B). As previously observed, the occupancy of the chromosomes has an exponential dependence on the number of target genes of a TF network. This indicates that a large increase in H results from small TFNs, with target genes clustered over a small number of chromosomes. These results indicate that large TFNs which have genes present on all the chromosomes probably regulate housekeeping genes and hence do not contribute strongly towards cell type specific responses.
Experimental validation
Our numerical approaches suggested that the spatial arrangement of chromosomes in a given cell type (human fibroblasts) is better optimized to its own expression pattern than to the expression patterns of other cell types. To further validate the results from the proposed numerical approaches, we experimentally tested the correlation between chromosome positioning and gene expression in another cell type of a different mammal, murine T cells, in two distinct transcription states, naïve and in vitro activated T cells, where global mRNA levels increase by ~5-fold [35]. We generated IADs for both the naïve and activated T cells (IAD Naive and IAD Activated) from genome-wide microarray data (GEO accession number GSE30196) obtained from our experiments ( Figure 6A & B). The microarray was done in duplicate. IPD was estimated for the candidate pairs of chromosomes 1-3, 1-4, 1-6, 3-17, 4-17 and 13-17 (which harbor 30% of the differentially expressed genes identified in the microarray) by 3D FISH performed in naïve and activated T cells (IPD Naive and IPD Activated) ( Figure 6D). The cells used for estimation of IPD were obtained from different batches of cell purification, using methods similar to those for IAD estimation. The homogeneity between the cells isolated from two different mice was quantified by comparing differences in the number of differentially regulated genes at similar conditions from two biological replicates. Figure S14 shows that the number of differentially regulated genes is more than ten-fold higher between different states of T cells (e.g. naïve NC1 and activated NC2) isolated in the same batch than between biological replicates of the same cell type (e.g. naïve NC1 and naïve NC2).
IAD matrices computed from the biological replicates also showed very small variations (measured as a matrix of standard deviations between IAD values generated from the two replicates), indicating that different batches of cells do not significantly alter the IAD matrix ( Fig S15). The IPD used in the earlier correlations was generated from the centroid positions of the chromosomes obtained from 3D chromosome FISH (Fig 6D and Fig S16). Inter-centroid distances are biased by the size of the chromosomes: two large chromosomes will tend to have their centroids farther apart than small chromosomes, even though the distance between the chromosome surfaces may be the same. To overcome this drawback, we generated the IPDs for specific chromosome pairs in T cells by measuring the distance between the chromosome surfaces. The IPDs were generated using the minimum interface distance among all four possible interface distances between the pair ( Figure 6E). The IPDs of naïve and activated T cells were correlated with their respective IADs and with IAD Muscle of murine muscle cells ( Figure 6C). The PCC values obtained using IPD and IAD (Table S4) of both naïve (0.28) and activated states (0.46) were higher than the PCCs computed between IAD Muscle and the IPD of T cells (0.002 for IPD Naïve, 0.27 for IPD Activated). Interestingly, the differences in the correlation coefficients are similar (0.28 − 0.002 = 0.278 and 0.46 − 0.27 = 0.19) for both naïve and activated T cells when compared to muscle cells. These results indicate that the correlation between chromosome organization and transcriptional output is a general phenomenon which can be observed in multiple cell types.
Discussion
Random loop polymer models have been extensively used to understand the internal architecture of chromosomes. These studies suggest a gene expression based looping probability of the chromatin fibre, which leads to the formation of functional DNA domains and confinement of chromatin to chromosome territories [36][37][38]. Transcriptional activity based chromosome intermingling has also been used to explain the frequent juxtaposition of certain pairs of chromosomes and the resulting chromosomal translocations [14]. However, very few methods exist to quantitatively measure the correlation between the physical proximity of chromosomes and transcriptional activity [16]. In this work we propose methods to probe the correspondence between chromosome positions and the global gene expression program. While chromosomes have been found to be radially distributed from the nuclear centroid according to their gene density [39][40][41], our methods were able to assess a further layer of three dimensional organization, in which the relative chromosome positions correlated with gene expression. Previous studies suggest both random [42,43] and non-random [27,44,45] chromosome positions, whereas our results and analysis revealed non-random organization within the fibroblast nucleus [27]. The IADs computed from the microarray data showed correlation between relative chromosome activities and their respective positions. These correlations support the co-clustering of genes for transcriptional control [21] by a small number of observed transcription factories [13,15]. The observed correlations may also have contributions from noise due to coarse graining and population averaging of the chromosome positions and their activities. Further, noise in chromosome activity measurements could also be contributed by mature mRNA, which does not exactly represent the stochastic nature of short-lived nascent mRNA transcripts produced at the sites of transcription at the single cell level [46,47].
The correlations can be improved by extending the methodology described here to build a more detailed IPD for smaller continuous regions of the chromosome and their corresponding IADs. Our evaluation methods in different cell types suggested that the arrangement of co-clustered [48] genes must be cell type specific, as we find a lack of correlation between the chromosome positions of one cell type and the gene expression program of another cell type. Cell type specific transcriptional programs are usually turned on by cell type specific TF networks [49], suggesting their involvement in modulating inter-chromosome interactions. Previous simulation and modeling work on the role of transcription factors in the organization of the genome in E. coli suggested the formation of DNA regulatory domains of co-regulated TF target genes [50]. Similarly, in yeast, target genes of TFs were shown to be preferentially co-clustered on the same chromosome [51]. In this work we have taken this idea further to suggest a role of TF networks in determining relative chromosome proximities in the nucleus. The numerical simulations shown here suggest that the activity of TFNs is correlated with the relative positions of chromosomes. An optimality measure H was devised to quantitatively understand the coupling between 3D chromosome positions and TFNs. Our predictions of the correspondence between chromosome positions and global gene expression were experimentally validated in naïve and activated states of mouse T cells. These results evidenced correlations between the IPD and IAD of T cells, whereas smaller correlations were observed between the IPD of T cells and the IAD of muscle cells.
Taken together, our methodologies were able to quantify the correspondence between the global gene expression program and the three-dimensional architecture of chromosome positions. While co-clustered genes have been shown to be co-regulated [21,52,53], the methods proposed here take these findings to the large-scale organization of the nucleus, where transcription dependent intermingling of proximal chromosome territories may become feasible. Interestingly, these correlations are found both at the scale of the transcriptome and at the scale of separate transcriptional networks [54]. Our findings suggest that the observed correlations between relative chromosome positions and transcriptional output are specific to a given cell type. The measured correlations are at steady state and with time averaged expression profiles, which smears the time resolved correspondence between chromosome positions and transcriptional activity. However, mechanistic insights into the origin of such correlations could be gained if they were observed during the process of differentiation. Such refinements of IPD and IAD at single cell resolution could in future yield better insight into the contribution of transcription to relative chromosome organization. In addition, the chromosome position in the nucleus results from the integration of many functional and spatial organizational cues such as epigenetic modifications [55], transcription machinery density, and post-transcriptional or replication requirements. The methodology presented here can be easily adapted to further investigate the contribution of these factors by quantifying them at a resolution similar to that of the IPD for chromosomes. The general mechanisms of chromosome topology [11] and their functional links will become apparent as one simultaneously probes the temporal evolution of these correlations through the process of cellular differentiation and its maintenance through the cell cycle.
Current methods to evaluate chromosome positions and their impact on gene expression have remained empirical. The introduction of a 2D matrix for chromosome positions enabled an analysis of transcriptional changes through cellular differentiation [16]. Our methods further establish, within a quantitative framework, the coupling between chromosome positions and the expression program of a given cell type. By implementing comparative analysis methods between the chromosome position matrix and the activity matrix we were able to evaluate the coupling between TFNs and chromosome organization. Our proposal of a phenomenological analytical function (H) allows a systematic numerical simulation of correlations relating the TFNs and chromosome topology. The function H could be modified to include epigenetic modifications or active RNA polymerase interactions to construct an activity distance matrix of these parameters. This method could be adapted to chromosome sub-domains by painting smaller regions of chromosomes or by using contact probabilities from chromosome capture assays [11] and correlating them with the corresponding activity distance matrix of these parameters. These matrices may further provide correlations at the finer resolution of gene clusters and their correspondence with transcription. We suggest that our methods describing the interfaces of CTs, in conjunction with chromosome capture assays, may also facilitate identifying cell type specific functional gene clusters. The methods described in this work could also be useful in establishing correlations between the three dimensional organization of chromosome positions and other functional networks such as signaling networks and chromatin remodeling networks.
Ethics Statement
All experiments involving animals were performed with the approval of the Institutional Animal Ethics Committee at National Centre for Biological Sciences, Bangalore headed by Prof. Mathew with committee members Professors Upinder Bhalla, Sumantra Chatterjee, MM Panicker and R. Sowdhamini. Approval ID for the project is AS-5/1/2008.
Inter-chromosome distance (IPD)
The Inter-chromosomal Physical Distance (IPD fib) for fibroblasts was obtained from the chromosomal distances r i and r j, the distances from the nuclear centroid (in units of nuclear radius) in the case of interphase and from the centre of the prometaphase ring in the case of prometaphase chromosomes, obtained from Bolzer et al. [27]. Each element in the matrix is calculated from the mean of the four possible distances between the two pairs of homologous chromosomes (as the current experiments cannot distinguish between two different homologues of the same chromosome), and further averaged over the 54 nuclei. Similar IPD matrices were constructed using the MDS distances (IPD MDS) and the minimum of the four possible distances between two pairs of homologous chromosomes (IPD Min).
IPD Randomization procedure
The inter-chromosomal physical distance IPD rand for a random configuration of the nucleus was obtained by iterative swapping of the chromosomes in the fibroblast nucleus, shuffling the rows and columns of the IPD fib matrix for 200 iterations ( Figure S1), which was sufficient for complete randomization, i.e. loss of chromosome position information from the initial configuration. The randomization process was designed to obey the triangle inequality, a basic property of Cartesian metric space, as no new spatial coordinates are created other than the actual r i already present. Rather, the rows and columns of the IPD matrix were interchanged in a cumulative fashion, with each shuffling performed on the previously shuffled matrix.
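The cumulative row-and-column shuffling can be sketched as follows (hypothetical function name): swapping the same pair of indices in both rows and columns relabels chromosomes without inventing new distances, so the triangle inequality of the original configuration is preserved:

```python
import random

def shuffle_ipd(ipd, iterations=200, seed=0):
    """Randomize an IPD matrix by repeatedly exchanging two chromosome
    labels, i.e. swapping the corresponding rows AND columns together.
    Each swap acts on the previously shuffled matrix (cumulative mode)."""
    rng = random.Random(seed)
    n = len(ipd)
    m = [row[:] for row in ipd]  # work on a copy; original stays intact
    for _ in range(iterations):
        i, j = rng.sample(range(n), 2)
        m[i], m[j] = m[j], m[i]      # swap rows i and j
        for row in m:                # swap columns i and j
            row[i], row[j] = row[j], row[i]
    return m
```

Because each step is a symmetric permutation (P M P^T), the shuffled matrix keeps the diagonal zeros, symmetry and the original multiset of pairwise distances.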
Inter-chromosome Basepair length Differences (IBD) Matrix
Differences in chromosome basepair length were represented as the Inter-chromosome Basepair length Difference (IBD) matrix, defined as IBD ij = |Chrbp(i) − Chrbp(j)|, where Chrbp(i) and Chrbp(j) are the basepair lengths of chromosomes i and j, respectively.
Inter-chromosome Activity (IAD) Matrices
The Inter-chromosome Activity Distance (IAD) was created from the microarray data of fibroblasts, obtained from Goetze et al. [28] (GEO accession no. GSM157869). In the microarray there are multiple probes for a single gene; the activity of a gene, in arbitrary units, was obtained by calculating the mean intensity of the multiple probes for that gene. The genes were further grouped into individual chromosomes and the activities were integrated over the whole chromosome. The total number of genes in each chromosome was obtained (http://vega.sanger.ac.uk/Homo_sapiens/index.html, as on Nov 11, 2008) and used to estimate the mean chromosome activity, where x ik denotes the activity of the k-th gene in the microarray for chromosome i and N is the total number of annotated genes in the chromosome, with the summation done over all genes in the microarray for that chromosome. Logarithmic activities were obtained for each chromosome to account for the large dynamic range of the gene expression data, and the IAD matrix was generated from differences in the logarithmic chromosome activities. To generate the IAD matrices for additional cell types, the microarray data for lung, oocytes and HUVEC were obtained from the Gene Expression Omnibus website (http://www.ncbi.nlm.nih.gov/geo). The accession numbers for the datasets are GSM101102 (lung cells) [56], GSM288812 (oocytes) [57] and GSM215557 (HUVECs) [58]. The chosen microarray data were generated on Affymetrix GeneChips, and the MAS5 algorithm was used to calculate probe intensities, identical to the fibroblast microarray data. In order to normalize the intensity variation between different cell types, the microarray data were normalized to the mean of the probes in the array.
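A minimal sketch of the IAD construction, under the assumption (the display equations are not reproduced in this text) that the distance between two chromosomes is the absolute difference of their logarithmic mean activities:

```python
import math

def chromosome_activity(gene_activities, n_annotated):
    """Mean chromosome activity: summed gene activities on a chromosome
    divided by the total number N of annotated genes on that chromosome."""
    return sum(gene_activities) / n_annotated

def iad_matrix(activities):
    """Assumed IAD form: pairwise absolute difference of the logarithmic
    chromosome activities; the log accounts for the large dynamic range
    of gene expression."""
    logs = [math.log(a) for a in activities]
    n = len(logs)
    return [[abs(logs[i] - logs[j]) for j in range(n)] for i in range(n)]
```

The resulting matrix is symmetric with a zero diagonal, matching the structure of the IPD matrices it is correlated against.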
where (x_norm) i is the normalized activity of the probe and x̄ is the average activity of the probes in the array. The activity of a gene was calculated by taking the mean of the normalized probe intensities. To evaluate cell type specific activities, cell types were considered as pairs (fibroblast-lung, fibroblast-HUVEC and fibroblast-oocyte). For example, if fibroblast and lung are considered as a pair, the differentially expressed genes between the two cell types were selected based on the expression level differences of corresponding genes in the two cell types. Genes with an expression level difference of more than 1 FWHM (Full Width at Half Maximum, calculated from the difference histogram) were selected. Hence, the activities of the same genes from both cell types were compared. Further, these genes were partitioned into their respective chromosomes and the IAD matrix for each cell type was computed with their respective logarithmic activities (similar to IAD fib). The two resulting matrices were named IAD fib-lung, if the activity is computed from fibroblast microarray data, and IAD lung-fib, if the activity is computed from the microarray of lung cells. This ensures that the number of genes selected for a given chromosome is the same in both cell types.
Adjacency and network matrix
The physical space of chromosomes was represented by the IPD fib matrix. To enhance the sensitivity to chromosome positions, an adjacency matrix was generated with weights for each inter-chromosome distance, A fib ij = exp(−IPD fib ij /l), where l in the exponential is the distance parameter, which was varied from 2-80% of the typical nuclear radius to find the optimum l. Upon variation of l, the increase in H value was computed ( Figure S7) and the l which showed the maximum increase was selected as the optimal l for the simulation. This functional form of the adjacency matrix, with a steep slope, is sensitive to changes in chromosome positions; the exponential form was used to detect small deviations in chromosome organization from the optimum configuration. The network space was represented by a network matrix W cell if, consisting of the 87 annotated transcription factor (TF) networks obtained from the Transcription Regulatory Element Database (TRED), Jiang et al. [56]. The genes in each TF network were identified in the microarray data for human fibroblast, lung, oocyte and HUVEC cell types and grouped into chromosomes. The chromosomal activity of genes involved in a particular TF network was obtained by calculating the natural logarithm of the integrated activities of all the identified genes in a chromosome. For chromosomes in which no genes were identified, a value of zero was assigned in the network matrix. A column vector was created for each individual network, where column f represents a single TF network. The 87 column vectors were aligned to obtain the network matrix for the TF networks, where f is a network and i a chromosome. All abbreviations used are listed in Table S5.
Mouse T-cell Microarray and Analysis
For T-cell experiments, cells were isolated from spleens of C57/Bl6 mice. All animals were bred and maintained in the NCBS animal house facility. Experiments were performed with the approval of the Institutional Animal Ethics Committee at NCBS, Bangalore, India. CD4+ naïve T cells were isolated from spleens of 8-10 week old mice using the MagCellect isolation kit (R&D Systems, MN, USA) and activated for 36 hours using αCD3-αCD28 coated beads (Invitrogen, CA, USA). This method consistently produces 90-98% pure populations of T cells (according to the manufacturer's protocol). Microarray experiments on cells purified in our laboratory at NCBS, Bangalore, were carried out at Genotypic (Bangalore, India). Duplicate experiments were done for both naïve and activated T cells. RNA extraction was done using the RNeasy Mini kit (Qiagen, UK); concentration and purity were determined using a Nanodrop ND-1000 spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA). The integrity of RNA was verified on an Agilent 2100 Bioanalyzer using the RNA 6000 Nano LabChip (Agilent Technologies, CA, USA). Equal amounts of RNA were labeled using Agilent Cy3 CTP dye and hybridized to the Mus musculus Gene Expression Array 4X44K (AMADID-014868). The slides were scanned using an Agilent Microarray Scanner G2505 version C at 2 μm resolution, and data were extracted using Agilent Feature Extraction software v9.5. Though the microarray was performed on two batches of T cells obtained from two different mice, gene expression across replicates was very similar compared to the gene expression differences between naïve and activated T cells, and hence does not introduce significant noise into the analysis. To further minimize noise in the estimation of the genome-wide expression profile, the mean probe intensity of the two duplicates was computed. The mean of the intensities of probes for the same gene was calculated to obtain the activity per gene.
Further, genes were grouped into chromosomes and the IAD matrix was computed as explained earlier.
The microarray data are MIAME compliant and accessible on the GEO website (http://www.ncbi.nlm.nih.gov/geo/; accession number GSE30196). Microarray data of murine muscle cells were obtained from Gene Expression Omnibus (GEO accession number GSM247205) and used as a negative control.
Chromosome painting and image analysis
For chromosome painting experiments, cells were attached to PDL coated slides followed by fixation in 4% PFA for 10 minutes. PFA was neutralized with 0.1 M Tris-HCl, and the cells were then washed and permeabilized with 0.5% Triton X-100 for 8 minutes. This was followed by incubation in 20% glycerol for 1 hour and then 5 or 6 freeze-thaw cycles in liquid nitrogen. After this, cells were treated with 0.1 N HCl for 10 minutes, washed and equilibrated in 50% formamide/2X SSC overnight at 4°C. Hybridization was set up the following day. Cells were denatured in 70% formamide/2X SSC at 85°C for 2 minutes and then incubated with the fluorescently labeled mouse whole chromosome FISH probes (Cambio, Cambridge, UK) for 2-3 days in a moist chamber at 37°C with shaking. At the end of the incubation period, slides were washed thrice each in 50% formamide/2X SSC at 45°C and in 0.1X SSC at 60°C. Cells were counterstained with Hoechst 33342 (Sigma, USA), mounted with Vectashield (Vector Laboratories, CA, USA), sealed with a coverslip and imaged on a Zeiss 510-Meta confocal microscope.
Inter-chromosome interface distances were computed using a custom written program in LabVIEW (National Instruments, TX, USA). Confocal Z sections for each chromosome were thresholded using the mean plus standard deviation of the fluorescence intensity of the Z-stack. An edge detection algorithm was applied to each thresholded confocal section to obtain the coordinates of the chromosomal edge at each z plane. The three dimensional distances between the edges of two chromosomes were computed, and the interface distance between the two chromosomes was estimated as the minimum of all the distances computed between their edge coordinates. Since two pairs of chromosomes are labeled in each nucleus, four interface distances were obtained; the average of the four interface distances for a given pair of chromosomes was then used as IPD mean for that pair.
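The brute-force core of the interface-distance computation can be sketched as follows (hypothetical names; the original analysis was implemented in LabVIEW):

```python
import math

def min_interface_distance(edge_a, edge_b):
    """Minimum 3D distance between the edge coordinates of two painted
    chromosome territories, taken over all edge-point pairs; each point
    is an (x, y, z) tuple from the thresholded confocal sections."""
    return min(math.dist(p, q) for p in edge_a for q in edge_b)

def pair_ipd(interface_distances):
    """With two homologues painted per chromosome, four interface
    distances exist per chromosome pair; their mean gives IPD_mean."""
    return sum(interface_distances) / len(interface_distances)
```

In practice a spatial index would replace the all-pairs scan for large edge-point sets, but the definition of the interface distance is exactly this minimum.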
Statistical analysis
The false discovery rate (FDR) was employed as a statistical measure to test the significance of the correlations obtained (Figures S3 & S5 and Table S2). The actual Pearson correlation coefficient (PCC) value in each case was denoted PCC 0. To compute the FDR, 10^5 randomized matrices (from either IPD fib or IPD prometaphase) were generated (each randomized matrix was computed by permuting its rows and columns) and each randomized matrix was correlated with the matrix under consideration. A histogram of all the PCC values for the correlations between the matrices was generated, and the instances with PCC > PCC 0 were counted. The FDR was estimated as the fraction of instances with PCC > PCC 0.
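The FDR procedure can be sketched as follows (hypothetical names; the sketch uses far fewer than the 10^5 permutations of the original analysis):

```python
import math, random

def _pcc(x, y):
    """Pearson correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def _upper(m):
    """Strict upper-triangle entries (i < j) of a square matrix."""
    n = len(m)
    return [m[i][j] for i in range(n) for j in range(i + 1, n)]

def estimate_fdr(ipd, iad, n_rand=1000, seed=0):
    """FDR of the observed correlation PCC_0 between IPD and IAD:
    permute the chromosome labels of IPD (rows and columns together)
    n_rand times and report the fraction of permutations whose PCC
    with IAD exceeds PCC_0."""
    rng = random.Random(seed)
    pcc0 = _pcc(_upper(ipd), _upper(iad))
    n, hits = len(ipd), 0
    for _ in range(n_rand):
        perm = list(range(n))
        rng.shuffle(perm)
        shuffled = [[ipd[perm[i]][perm[j]] for j in range(n)] for i in range(n)]
        if _pcc(_upper(shuffled), _upper(iad)) > pcc0:
            hits += 1
    return hits / n_rand
```

A small FDR means that random relabelings of the chromosomes rarely reproduce a correlation as strong as the observed one.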
"year": 2012,
"sha1": "2812769c908b7f3e4aa3a82e0d7851f2b1b73b77",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0046628&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2812769c908b7f3e4aa3a82e0d7851f2b1b73b77",
"s2fieldsofstudy": [
"Biology",
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Two Dosing Regimens of Certolizumab Pegol in Patients With Active Rheumatoid Arthritis
Objective To investigate clinical efficacy and safety of 2 certolizumab pegol (CZP) maintenance dosing regimens plus methotrexate (MTX) in active rheumatoid arthritis (RA) patients achieving the American College of Rheumatology 20% improvement criteria (ACR20) after the CZP 200 mg every 2 weeks open-label run-in period. Methods DOSEFLEX (dosing flexibility) was a double-blind, placebo-controlled randomized study with an open-label run-in phase. During the run-in phase, all patients received CZP 400 mg (weeks 0, 2, and 4) and 200 mg every 2 weeks to week 16. Week 16 ACR20 responders were randomized 1:1:1 at week 18 to CZP 200 mg every 2 weeks, 400 mg every 4 weeks, or placebo. Results A total of 209 (of 333) patients were randomized at week 18 (CZP: 200 mg, n = 70; 400 mg, n = 70; placebo, n = 69). Groups had similar baseline characteristics (week 0). Week 34 ACR20 response rates were comparable between the CZP 200 mg every 2 weeks and the 400 mg every 4 weeks groups (67.1% versus 65.2%), which was significantly higher than placebo (44.9%; P = 0.009 and P = 0.017). ACR50/70 and remission criteria were met more frequently in CZP groups than placebo at week 34, with similar responses between anti–tumor necrosis factor–experienced and naive patients. Improvements from baseline Disease Activity Score in 28 joints using the erythrocyte sedimentation rate and Health Assessment Questionnaire disability index scores were maintained in CZP groups from week 16 to 34 while worsening on placebo. Adverse event (AE) rates in the double-blind phase were 62.9% versus 60.9% versus 62.3%; serious AE rates were 7.1% versus 2.9% versus 0.0% (CZP 200 mg, 400 mg, and placebo groups). Conclusion In active RA patients with an incomplete MTX response, CZP 200 mg every 2 weeks and 400 mg every 4 weeks were comparable and better than placebo for maintaining clinical response to week 34 following a 16-week, open-label run-in phase.
INTRODUCTION
Anti-tumor necrosis factor (anti-TNF) agents represent a major improvement in rheumatoid arthritis (RA) treatment (1)(2)(3). Although efficacy and safety remain the primary factors in selecting treatments, convenience of administration is also an important consideration. Patient surveys report that subcutaneous therapies are the preferred choice as they can be administered at home. Furthermore, research has shown a preference for therapies that can be administered as infrequently as possible (4,5).
Certolizumab pegol (CZP) is a PEGylated, Fc-free anti-TNF agent approved in Europe and the US for the treatment of adult patients with moderate to severe active RA (6). The current recommended dose for CZP therapy is a loading dose of 400 mg at weeks 0, 2, and 4, followed by a maintenance dose of 200 mg CZP every 2 weeks (7,8). The maintenance dosing regimen of CZP 400 mg every 4 weeks is approved in the US and Europe, providing dosing flexibility and the convenience of less frequent dosing for some patients. Clinical trials have compared the safety and efficacy of CZP dosing regimens of 200 mg every 2 weeks and 400 mg every 2 weeks versus placebo (7,9), and CZP 400 mg every 4 weeks has also demonstrated efficacy, both in combination with methotrexate (MTX) (10) and as monotherapy (11). This is the first study to date to directly compare these two maintenance dosing regimens.
Limited data from clinical trials exist on the efficacy of second and subsequent biologic therapy in patients who require a switch from their initial anti-TNF agent (12). In this study, the impact on treatment by prior anti-TNF use is also considered.
PATIENTS AND METHODS
Patients. Eligible patients were age ≥18 years, with a diagnosis of adult-onset RA (6 months-15 years); all had moderate to severe active RA insufficiently controlled by MTX. Patients must have had active disease, defined by ≥6 tender joints, ≥4 swollen joints (of 28 joints), ≥10 mg/dl C-reactive protein level and/or ≥28 mm/hour erythrocyte sedimentation rate (ESR), and be rheumatoid factor or anti-cyclic citrullinated peptide antibody positive. All had ≥3 months MTX treatment (10-25 mg/week) with a stable dose for ≥2 months prior to the baseline visit.
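The activity criteria above combine joint counts, an acute-phase lab threshold, and serostatus. As a purely illustrative sketch (the function name and argument encoding are ours, not the protocol's), the screening logic can be written as a predicate, reading "and/or" as an inclusive-or of the two lab thresholds:

```python
def meets_activity_criteria(tender_joints: int, swollen_joints: int,
                            crp_mg_dl: float, esr_mm_hr: float,
                            seropositive: bool) -> bool:
    """Illustrative encoding of the active-disease screen described above.

    seropositive means rheumatoid factor-positive or anti-CCP-positive;
    the CRP/ESR clause is satisfied if either threshold is met.
    """
    labs_active = crp_mg_dl >= 10 or esr_mm_hr >= 28
    return (tender_joints >= 6 and swollen_joints >= 4
            and labs_active and seropositive)

# A seropositive patient with 7 tender joints, 5 swollen joints,
# normal CRP but elevated ESR qualifies:
print(meets_activity_criteria(7, 5, crp_mg_dl=2.0, esr_mm_hr=35,
                              seropositive=True))  # → True
```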
Patients who failed to respond to previous anti-TNF treatment were excluded. Anti-TNF responders who later discontinued that drug due to loss of efficacy or other reasons were eligible, provided that previous biologic therapy was stopped ≥3 months before baseline, except for etanercept or anakinra (1 month). Concomitant treatment was allowed with analgesics, nonsteroidal antiinflammatory drugs/cyclooxygenase 2 inhibitors, and corticosteroids (prednisone or equivalent, ≤10 mg/day). Corticosteroid doses could be reduced according to local guidelines; dose increases were not permitted. Exclusion criteria included diagnosis of any other inflammatory arthritis, secondary noninflammatory arthritis, history of chronic infections, serious infections, lymphoproliferative disorder, malignancy or demyelinating disease, history of or currently active tuberculosis (TB), a positive chest radiograph for TB or a positive purified protein derivative (PPD) skin test (≥5 mm), or close contact with individuals with active TB. Patients positive for PPD could be included if active TB was ruled out and they were adequately treated for latent TB (e.g., isonicotinic acid hydrazide [isoniazid] therapy for 9 months [with vitamin B6]), with treatment initiated ≥1 month prior to study drug administration. Classical exclusion criteria for anti-TNF therapy were also applied.

Study design. This was a phase IIIb multicenter study with an open-label run-in period, followed by a double-blind, placebo-controlled randomized period (Figure 1A). The study protocol was approved by an independent ethics committee or institutional review board at each center across the US, France, and Canada and carried out in accordance with the Declaration of Helsinki. During the open-label run-in phase, all patients received a CZP loading dose followed by 200 mg CZP every 2 weeks up to week 16 as an add-on to background MTX therapy.
Patients were classified according to the American College of Rheumatology 20% improvement criteria (ACR20) (13) response at week 16; at week 18 ACR20 nonresponders were withdrawn and responders were randomized 1:1:1 to either 200 mg CZP every 2 weeks, 400 mg CZP every 4 weeks, or placebo during the double-blind phase, up to week 34. Unblinded staff prepared and administered study medication but had no other involvement in the study. US or Canadian patients who experienced disease flares between weeks 18 and 34 (defined as patients who had a swollen joint count and tender joint count equal to or worse than baseline) or who completed week 34 could enroll in an open-label safety study (NCT00753454). At the start of the open-label safety study, to maintain the blinding, all patients received 400 mg CZP at weeks 0, 2, and 4, followed by 200 mg CZP every 2 weeks thereafter.

Significance & Innovations

• The study design used here to investigate the efficacy of maintenance dose regimens has not been specifically tested previously in adult rheumatoid arthritis patients. It examines maintenance of response both in anti-tumor necrosis factor (anti-TNF)-naive patients and in anti-TNF secondary incomplete responders after an open-label run-in phase. It also examines dose differences in those circumstances and compares results to placebo on a methotrexate (MTX) background. The placebo group allows some understanding of duration of response after the initial open-label period. A similar design could be used to answer questions on dosing flexibility and duration of response on withdrawal for other drugs.

• This study showed that certolizumab pegol (CZP) 200 mg every 2 weeks and 400 mg every 4 weeks dosing regimens are both effective in maintaining a clinical and functional response in combination with MTX in patients with an incomplete response to MTX alone, once an initial response has been achieved.

• Specifically, this study also demonstrated that both maintenance doses of CZP are efficacious in patients who were anti-TNF naive and in those who initially responded to previous anti-TNF treatment but later discontinued due to loss of efficacy or other reasons. This result may allow patients to have more flexibility in maintenance dosing treatment.

Furst et al
Efficacy and safety evaluations. The primary objective was clinical efficacy by ACR20 response criteria at the end of the double-blind phase (week 34). ACR20 responders at week 18 were assessed for maintenance of clinical response over an additional 16 weeks (week 34). Secondary efficacy end points were: 1) ACR20, ACR50, and ACR70 response rates at weeks 4, 8, 12, 16, 18, and 20, then every 4 weeks until week 34; 2) the Clinical Disease Activity Index (CDAI), the Simplified Disease Activity Index (SDAI), and the Disease Activity Score in 28 joints using the ESR (DAS28-ESR) remission (defined as ≤2.8, ≤3.3, and <2.6, respectively), and change from baseline (week 0) in CDAI, SDAI, DAS28-ESR, and Health Assessment Questionnaire (HAQ) disability index (DI) at weeks 16 and 34; and 3) patient's assessment of arthritis pain and patient's global assessment of disease activity (both assessed on a 100-mm visual analog scale), fatigue (measured on a 10-point fatigue assessment scale), and Short Form 36 (SF-36) domains and physical component (PCS) and mental component (MCS) summaries at week 34. Safety assessments, performed over the entire study period, included measurement of vital signs and laboratory parameters, recording of adverse events (AEs), serious AEs (SAEs), injection-site reactions, and serious infections, and monitoring for signs or symptoms of TB.
Statistical analysis. Assuming a 50% response rate in the placebo group and 80% in the CZP-treated arms, 67 patients were needed per treatment arm to achieve ≥90% power to show a statistically significant difference in ACR20 response rate at week 34, using a 2-sided Fisher's exact test with a significance level of 0.025. The sample size was based upon the Bonferroni method, which conservatively sets the alpha level at 2.5% to account for 2 primary comparisons.
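The stated sample size (67 per arm for ≥90% power, 50% versus 80% response, 2-sided alpha of 2.5%) can be reproduced with the standard two-proportion normal approximation plus Fleiss's continuity correction, a common stand-in for Fisher's exact test. This is a sketch under that assumption; the paper does not state the exact method used:

```python
from math import sqrt, ceil
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.025,
              power: float = 0.90) -> int:
    """Per-arm sample size for comparing two proportions.

    Normal approximation with Fleiss continuity correction, often used
    to approximate a two-sided Fisher's exact test.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = NormalDist().inv_cdf(power)
    pbar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * pbar * (1 - pbar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    n = (num / (p1 - p2)) ** 2
    # Fleiss continuity correction brings the estimate in line with the exact test
    n_cc = n / 4 * (1 + sqrt(1 + 4 / (n * abs(p1 - p2)))) ** 2
    return ceil(n_cc)

print(n_per_arm(0.50, 0.80))  # → 67, matching the 67 patients per arm above
```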
Baseline demographics and disease characteristics were summarized for the enrolled patients, defined as those who entered the run-in phase, as well as those entering the double-blind phase. Efficacy analyses on week 16 outcomes were conducted on patients who took Ն1 dose of study medication during the run-in phase. Efficacy analyses on the randomized population were carried out on the full analysis set, defined as the treated, randomized patients during the double-blind phase.
Safety analyses were performed on the enrolled set (all patients who entered the run-in phase) and on the safety set (all patients who were treated in the double-blind phase) for the overall study period (run-in phase, doubleblind phase, and open-label extension).
Analyses by prior anti-TNF therapy at baseline up to week 34 were post hoc, and statistical comparisons were not undertaken due to the exploratory nature of the analyses.
Missing data were imputed using nonresponder imputation for ACR responses and CDAI, SDAI, and DAS28-ESR remission rates, and last observation carried forward for other outcomes. ACR20, ACR50, and ACR70 responder rates and DAS28, CDAI, and SDAI remission rates were analyzed using a logistic regression model, including terms for treatment. Each comparison of an active arm versus the placebo arm was made at the 2.5% level, and odds ratios (ORs) were estimated and presented with 97.5% confidence intervals.
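The two imputation rules described above can be sketched in a few lines (illustrative helper names; the actual trial analysis used validated statistical software):

```python
def nonresponder_imputation(visits):
    """Binary responder status per visit; a missing value (None) counts as
    nonresponse, as applied here to ACR responses and remission rates."""
    return [bool(v) if v is not None else False for v in visits]

def locf(values):
    """Last observation carried forward for continuous outcomes (e.g., HAQ DI).
    Missing values inherit the most recent observed value."""
    filled, last = [], None
    for v in values:
        if v is not None:
            last = v
        filled.append(last)
    return filled

print(nonresponder_imputation([True, None, True]))  # → [True, False, True]
print(locf([1.2, None, 0.8, None]))                 # → [1.2, 1.2, 0.8, 0.8]
```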
RESULTS

Patients.
A total of 333 patients entered the run-in phase, of which 209 patients (62.8%) received CZP 200 mg every 2 weeks up to week 16 and then were randomized at week 18 to placebo plus MTX (n = 69), CZP 200 mg every 2 weeks plus MTX (n = 70), or CZP 400 mg every 4 weeks plus MTX (n = 70) (Figure 1B). One patient, randomized to the CZP 400 mg every 4 weeks group, was not treated in the double-blind phase. Of the 124 patients who withdrew from the study during the run-in phase, 94 either did not achieve ACR20 response or lost their initial response, 17 experienced AEs leading to dropout, 5 withdrew consent, 4 were lost to followup, and 4 dropped out for other reasons (Figure 1B).
In total, 54 placebo patients (78.3%), 61 CZP 200 mg every 2 weeks patients (87.1%), and 63 CZP 400 mg every 4 weeks patients (90.0%) completed the double-blind phase. Overall, baseline characteristics were similar among the 3 double-blind treatment groups (Table 1). However, more patients who were randomized to the CZP treatment arms had prior anti-TNF exposure at baseline compared to placebo patients; patients with prior anti-TNF use at baseline had longer disease duration and lower ESR than those without (Table 1).
Treatment efficacy: week 34 (double-blind phase).
The ACR20 response at week 34 in the CZP 200 mg every 2 weeks group and the CZP 400 mg every 4 weeks group was significantly greater than in the placebo group (P = 0.009 and P = 0.017 by logistic regression, respectively) (Figure 3A). Similarly, the ACR50 responses were significantly higher in both the CZP 200 mg every 2 weeks and 400 mg every 4 weeks groups than in the placebo group, with a comparable magnitude of change for both CZP dose regimens. For the ACR70 response, the CZP 400 mg every 4 weeks group was significantly better than the placebo group (Figure 3A), while the CZP 200 mg every 2 weeks group was numerically greater but did not reach statistical significance versus placebo (P = 0.005 and P = 0.052 by logistic regression, respectively). Improvements in disease activity and physical function at week 34 were greater in patients receiving CZP in the double-blind phase compared to those randomized to switch to placebo (Figure 3B and D). Both disease activity and physical function worsened from week 16 following CZP withdrawal (Figure 3C and D).
The proportion of patients who met the minimum clinically important difference (MCID) for fatigue (1 unit) was numerically larger in both the CZP 200 mg every 2 weeks group (65.7%) and the CZP 400 mg every 4 weeks group (58.0%) than in the placebo group (46.4%). The same was true for pain, where 62.9% of the CZP 200 mg every 2 weeks group met the MCID.

Treatment efficacy by prior anti-TNF. At week 16, following treatment with CZP 200 mg every 2 weeks, the ACR response was similar for patients with (n = 178) versus without (n = 155) prior anti-TNF exposure: ACR20: 60.7% versus 61.9%; ACR50: 34.8% versus 41.3%; and ACR70: 14.0% versus 18.7%, respectively. Within the randomized set, ACR20, ACR50, and ACR70 responses at week 34 were comparable in both CZP treatment arms regardless of prior anti-TNF experience (Figure 4A). Similar DAS28-ESR, SDAI, and CDAI remission rates were observed in both CZP-treated groups at week 34 (Figure 4B). The proportion of patients with low disease activity was comparable. For patients in the placebo group, response and remission rates at week 34 were numerically lower in prior anti-TNF patients compared to anti-TNF-naive patients (Figure 4A and B), although these rates were not tested statistically.
Overall, the change from baseline in DAS28-ESR to week 34 was similar between patients with and without prior anti-TNF exposure at baseline in the placebo group.

Safety. Safety results are reported for all patients who received CZP in the study, across the run-in, double-blind, and open-label extension phases (Table 2). Taken together, the most common AEs were infections and infestations (occurring in 54.1% of patients), and the most common of those were upper respiratory tract infections. Injection and infusion site reactions occurred in 11 patients (3.3%) overall. SAEs were reported in 8.7% of patients, with the most frequent being infections and infestations (3.9%), musculoskeletal and connective tissue disorders (1.8%), and cardiac disorders (1.2%). There were no deaths and 1 case each of malignant melanoma and basal cell carcinoma. Standard exclusion criteria for TB in trials of biologic agents were applied, and there were no reported TB cases in any phase. During the double-blind phase, the rate of AEs for all randomized, treated patients (safety set) was comparable among the 3 treatment groups (Table 2). The most common AEs in the placebo, CZP 200 mg every 2 weeks, and CZP 400 mg every 4 weeks groups were in the following system organ classes: infections and infestations; musculoskeletal and connective tissue disorders; gastrointestinal disorders; and respiratory, thoracic, and mediastinal disorders. Among these, there were no apparent differences, except for the respiratory, thoracic, and mediastinal disorders (e.g., cough), for which more AEs occurred in the placebo group. There were no deaths, TB infections, or malignancies reported during the double-blind phase.
There were no SAEs reported in the placebo group during the double-blind phase as compared to 5 patients (7.1%) in the CZP 200 mg every 2 weeks group and 2 patients (2.9%) in the CZP 400 mg every 4 weeks group. The most common SAEs were infections and infestations, with 1 case each of oral candidiasis, herpes pharyngitis, pneumonia, and kidney infection in the CZP 200 mg every 2 weeks group. There were no serious infections in the CZP 400 mg every 4 weeks group. There were no instances of injection site pain and only 1 instance of a local injection site rash, reported by an investigator in the CZP 200 mg every 2 weeks group during the double-blind phase.
DISCUSSION
The DOSEFLEX study investigated the efficacy and safety of 2 dosing regimens of CZP (200 mg every 2 weeks and 400 mg every 4 weeks) in maintaining clinical response in active RA patients who had demonstrated an initial response to CZP.
The primary outcome, ACR20 response at week 34, was met by approximately two-thirds of patients in both CZP dosage groups, significantly more than the 45% in the group randomized to placebo following initial CZP treatment. These results were consistent across secondary end points, including the composite disease activity indices and measures of physical function. Interestingly, although DAS28 remission is often considered to be the least stringent of these measures (14), in this study similar numbers of patients achieved DAS28, SDAI, and CDAI remission. Both the maintenance dose of CZP 200 mg every 2 weeks and the increased dosing interval regimen of CZP 400 mg every 4 weeks demonstrated comparable efficacy to each other and greater efficacy versus placebo. Provision of such dosing flexibility for CZP-treated RA patients in clinical practice would provide patients and physicians with increased choice and convenience. The utility of such variation in dosing has been shown for both infliximab (2,15,16) and adalimumab (17) in RA, with less frequent dosing schedules resulting in increased rates of compliance and adherence across a range of therapeutic areas (18-20).
Post hoc analyses demonstrated that response to CZP during the run-in phase and the response for the 2 different dosing regimens were similar regardless of prior anti-TNF exposure at baseline. This supports emerging data on the efficacy of CZP in patients with prior anti-TNF exposure from clinical trials (21,22), observational studies (23,24), and registries (25). These studies have shown robust clinical responses to CZP, irrespective of previous anti-TNF therapy. Studies with other biologic agents have also shown efficacy in RA patients with prior anti-TNF experience (26-29). In such anti-TNF-exposed patients, clinical responses tend to decrease with the number of previous anti-TNF therapies received (30,31).
The DOSEFLEX study also enabled assessment of the impact of withdrawing therapy in patients who demonstrate an initial ACR20 response to CZP, although this was not a primary objective of the trial. European League Against Rheumatism recommendations suggest withdrawal of biologic disease-modifying antirheumatic drugs (DMARDs) should be considered only in patients with persistent stable remission once glucocorticoids have been tapered (32). The results from the DOSEFLEX study in CZP responders add to the limited evidence base from studies that have assessed withdrawal of biologic agents after prolonged clinical remission (33)(34)(35)(36), and provide some evidence on the duration of response after withdrawal: 44.9% of patients who initially responded to CZP and were randomized to the placebo group remained unchanged with respect to ACR20 response and 55.1% worsened, with mean change from baseline in DAS28-ESR score worsening between weeks 18 and 34. Of interest, patients with prior anti-TNF exposure had a greater increase in disease activity compared with those who were anti-TNF naive. Although sample sizes are small and a longer followup period would be needed, this suggests that the prior anti-TNF exposure patients are more refractory and it may therefore be more difficult to withdraw therapy. Further investigation of predictive factors allowing discontinuation of therapy is clearly appropriate. Studies have shown that, in general, it is possible to withdraw therapy in early RA MTX-naive patients (37), but this strategy has been shown to be less successful in patients with longer disease duration who have failed to respond to DMARDs (35,38). The AE profile of CZP in this study, including the open-label extension, was consistent with previously reported studies (7,9,11,21) and also in line with other anti-TNF therapies; no new safety signals for CZP were identified (37,39).
A limitation of this study is that it was not designed to test the equivalence or noninferiority of the 2 CZP maintenance doses. However, by directly comparing the data, maintenance of response is similar for most efficacy parameters regardless of dosing schedule. Further research is required to confirm that there is a similar radiographic response between the 2 maintenance doses. Additional limitations of this study are that analyses of stratification by prior anti-TNF exposure at baseline were post hoc, and therefore no statistical tests could be conducted, particularly in view of the small sample size and the limited duration of followup. Further, patients with prior anti-TNF therapy at baseline had stopped their treatment due to a variety of reasons; nevertheless, primary anti-TNF treatment failure patients who did not respond to anti-TNF therapy were excluded, so this small subset could not be examined. Although efficacy data were not analyzed by the reason for therapy discontinuation, data from the REALISTIC study have shown that response rates were similar among CZP patients irrespective of whether they discontinued anti-TNF therapy due to reasons of safety or efficacy (21).
In conclusion, in RA patients on background MTX therapy who achieved an initial clinical response to 16 weeks of CZP treatment, the less frequent dosing regimen of CZP 400 mg every 4 weeks was comparable to the CZP 200 mg every 2 weeks maintenance dose, independent of prior anti-TNF use. This may allow patients to have flexibility in dosing between the 2 schedules, providing more convenient, less frequent dosing for some patients without impacting the clinical efficacy or safety of treatment.
"year": 2015,
"sha1": "c837a0d465fa986ee75361e498e6918f12fc8ce3",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/acr.22496",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c837a0d465fa986ee75361e498e6918f12fc8ce3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Variant of the Breast Cancer Type 2 Susceptibility Protein (BRC) Repeat Is Essential for the RECQL5 Helicase to Interact with RAD51 Recombinase for Genome Stabilization*
Background: The BRC repeat is essential for BRCA2 to bind RAD51 and promote homologous recombination. Results: A BRC repeat variant is essential for RECQL5 to bind RAD51 and suppress homologous recombination. Conclusion: The BRC repeat can be utilized to either promote or suppress homologous recombination. Significance: Discovery of multiple functions of the BRC repeat is important for understanding regulation of homologous recombination. The BRC repeat is a structural motif in the tumor suppressor BRCA2 (breast cancer type 2 susceptibility protein), which promotes homologous recombination (HR) by regulating RAD51 recombinase activity. To date, the BRC repeat has not been observed in other proteins, so that its role in HR is inferred only in the context of BRCA2. Here, we identified a BRC repeat variant, named BRCv, in the RECQL5 helicase, which possesses anti-recombinase activity in vitro and suppresses HR and promotes cellular resistance to camptothecin-induced replication stress in vivo. RECQL5-BRCv interacted with RAD51 through two conserved motifs similar to those in the BRCA2-BRC repeat. Mutations of either motif compromised functions of RECQL5, including association with RAD51, inhibition of RAD51-mediated D-loop formation, suppression of sister chromatid exchange, and resistance to camptothecin-induced replication stress. Potential BRCvs were also found in other HR regulatory proteins, including Srs2 and Sgs1, which possess anti-recombinase activities similar to that of RECQL5. A point mutation in the predicted Srs2-BRCv disrupted the ability of the protein to bind RAD51 and to inhibit D-loop formation. Thus, BRC is a common RAD51 interaction module that can be utilized by different proteins to either promote HR, as in the case of BRCA2, or to suppress HR, as in RECQL5.
Homologous recombination (HR) is critical for the error-free repair of chromosomal lesions, such as DNA double strand breaks, and also for the recovery of damaged replication forks (1,2). During meiosis, HR generates crossovers among homologous chromosomes to ensure their proper segregation at the first meiotic division (1,3). Thus, HR is indispensable for the maintenance of genome integrity and meiotic chromosome segregation (4). In fact, defects in HR can lead to cancer and meiotic failure (4). On the other hand, untimely and inappropriate HR can cause gross chromosome rearrangements, including translocations, deletions, and inversions, with potentially mutagenic or oncogenic consequences (2,4). Therefore, HR is tightly controlled in cells by a variety of pro- and anti-recombinogenic regulatory mechanisms.
Human BRCA2 and its orthologs are an important positive regulatory element for HR (5)(6)(7). BRCA2 directly interacts with RAD51 recombinase and recruits it to double strand breaks. BRCA2 promotes the formation of the RAD51-ssDNA nucleoprotein filament that pairs with and invades a homologous DNA duplex to initiate homologous DNA repair. Moreover, BRCA2 modulates the DNA binding selectivity of RAD51 to stimulate strand exchange (8,9). The primary RAD51 interaction domain in BRCA2 and its orthologs consists of a varying number of the BRC repeat, a module of ~35 amino acid residues (10,11). The BRC repeat directly associates with RAD51 or RAD51-DNA filaments, regulates the DNA binding activity of RAD51, enhances the exchange of the ssDNA-binding factor replication protein A (RPA) by RAD51 on ssDNA, and promotes RAD51-dependent homologous DNA pairing (8,9,12,13). Structural and functional analyses have revealed that the BRC repeat harbors two motifs, referred to as motif 1 and motif 2, which are separated by a linker sequence (11,14). Motif 1 binds the oligomerization interface of RAD51 to regulate RAD51 filament assembly, whereas motif 2 binds a separate region of RAD51, and a functional BRC repeat requires the two motifs working in tandem pairs (14).
To date, the BRC repeat has not been discovered in any other proteins, but we report here its presence in RECQL5, where it also regulates HR. RECQL5 is one of five RecQ-like DNA helicases in mammalian cells (15,16). Genetic studies in mouse and chicken DT40 cells have implicated RECQL5 in the suppression of HR, including inhibition of sister chromatid exchange (SCE) (17,18), in the preservation of genome integrity upon genotoxic stress (19) and in cancer avoidance (20). RECQL5 directly interacts with RAD51 and inhibits RAD51-mediated homologous DNA pairing by dismantling the RAD51 presynaptic filament assembled on ssDNA (20,21). A previous study has mapped a RAD51-interacting region between residues 654 and 725 of RECQL5 and shown that several residues within this region are important for binding and regulation of RAD51 recombinase in vitro and for HR regulation in vivo (21). However, the region was not defined as a recognizable domain. Here, we show that the RAD51 interaction domain in RECQL5 consists of two motifs with a strong resemblance to the conserved motifs found in the BRCA2-BRC repeats. We have named this RECQL5 domain BRCv (BRC variant) because it also harbors notable differences from the BRC repeats in BRCA2. Importantly, we demonstrate that both motifs of BRCv are required for RECQL5 to interact with RAD51, to inhibit RAD51-mediated homologous DNA pairing, to suppress SCE, and to tolerate replication stress. Sequence-based comparisons reveal that potential BRCvs are present in other HR regulatory proteins, suggesting that the BRC repeat and its variants may be employed by many proteins in the regulation of HR.
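As a toy illustration of the kind of sequence-based comparison mentioned above, one could scan a protein sequence for the conserved FxxA module of BRC motif 1. This is a drastic simplification made for illustration: real BRC/BRCv identification relies on alignment of the full ~35-residue repeat and conservation across species, not a 4-residue pattern.

```python
import re

def fxxa_hits(protein_seq: str):
    """Return 0-based positions of candidate FxxA modules (Phe-X-X-Ala).

    A lookahead is used so that overlapping candidate sites are all reported.
    This is a crude first-pass filter, not a BRC-repeat detector.
    """
    return [m.start() for m in re.finditer(r'(?=F..A)', protein_seq)]

# Toy sequence containing a single FxxA module (FHTA) at position 10:
print(fxxa_hits("MKTAYIAKQRFHTASGKSV"))  # → [10]
```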
Protein Purification-Wild-type and mutant forms of human RECQL5 were expressed in Escherichia coli and purified as described previously (20). Human RAD51 K133R, RPA, and HOP2-MND1 complex were purified according to previously published protocols (22,23). Wild type and mutant forms of Srs2 were expressed in E. coli and purified as described (24,25). Yeast Rad51 protein was expressed in yeast cells and purified as described (26).
Immunoprecipitation-RecQL5-associated complexes were isolated from total cell lysate of HEK-293 cells transiently expressing FLAG-tagged RECQL5 (FRECQL5). Cell pellets were suspended in three volumes of lysis buffer (50 mM Tris-HCl (pH 8.0), 150 mM NaCl, 5 mM MgCl 2 , and 0.5% Nonidet P-40) containing 1 mM PMSF, 1 mM DTT, and protease inhibitor mixture. FLAG immunoprecipitation (IP) was done according to the manufacturer's protocol (Sigma). In brief, cell lysates were diluted three times with lysis buffer and incubated with anti-FLAG M2-agarose affinity gel for at least 12 h at 4°C. The resin was washed three times for 10 min each with lysis buffer and then treated with the FLAG peptide to elute proteins. The eluate was subjected to immunoblotting.
Affinity Pull-down-His9-tagged wild type or mutant Srs2 was incubated with yeast Rad51 in 30 μl of buffer (25 mM Tris-HCl, pH 7.5, 0.01% Igepal, 1 mM 2-mercaptoethanol, and 10 mM imidazole) containing 100 mM KCl for 30 min at 4°C and then mixed with 10 μl of Ni-NTA-agarose (Qiagen) for 2 h at 4°C to capture the tagged Srs2 and Rad51. The resin was washed three times with 50 μl of the same buffer and then treated with 20 μl of 2% SDS to elute proteins. The supernatant, last wash, and SDS eluate (10 μl each) were analyzed by SDS-PAGE.
ATPase Assay-The ATPase assay was carried out as described (20). Briefly, RECQL5 or one of the indicated RECQL5 mutants (25 nM) was incubated with ΦX174 ssDNA (75 μM nucleotides) and 1.5 mM [γ-32P]ATP for the indicated times. For wild type and mutant Srs2, 15 nM protein was used. Analysis was done by thin layer chromatography (20).
D-loop Assay-This was conducted at 37°C as described (20), by incubating the RAD51-K133R protein (1 μM) with radioactively labeled 90-mer oligonucleotide D1 (3 μM nucleotides) for 5 min, followed by the incorporation of RPA (135 nM) together with RECQL5 or the indicated RECQL5 mutant (25 nM) and a 5-min incubation. Then HOP2-MND1 (300 nM) and pBluescript replicative form I DNA (50 μM base pairs) were added to complete the reaction. After 10 min of incubation, the reaction mixtures were deproteinized and analyzed, as described (20). The D-loop assay for wild type and mutant forms of Srs2 was conducted according to Ref. 24.
Complementation Analyses-Complementation analyses were done as described previously (27) with some modifications. In brief, FLAG-tagged RECQL5 constructs were generated by using a QuikChange multisite-directed mutagenesis kit and transfected into BLM-/-/RECQL5-/- chicken DT40 cells by using nucleofection solution. Pellets of 1 × 10⁶ cells were suspended in 100 μl of nucleofection solution, and 2-5 μg of DNA was then added to the suspensions. Due to the low transfection efficiency, each construct was digested with XhoI, and the linearized plasmids were used for transfection. Transfected cells were incubated for 24 h in a 6-well plate with 2.0 ml of medium. Stable clones were isolated from the primary transfectant pools using limiting dilution into 96-well plates and selection for resistance to zeocin.
SCE Assay-The SCE assay followed the published protocol (27,28) with minor modifications. In brief, cell cultures were grown through two cell cycles in the presence of 10 μM 5-bromodeoxyuridine (BrdU). Colcemid was added to a final concentration of 0.1 μg/ml to accumulate mitotic cells 2 h prior to the harvesting of cells. Harvested cells were then treated with 75 mM KCl for 20 min at room temperature and then fixed with 3:1 (v/v) methanol-glacial acetic acid. The fixed cell suspension was dropped onto a glass slide and air-dried. The cells on the slides were incubated with 10 μg of Hoechst 33258/ml in 50 mM phosphate buffer (pH 6.8) for 20 min and rinsed with MacIlvaine solution. The cells were exposed to UV for 60 min and then incubated in 2× SSC at room temperature for 60 min. The cells were finally stained with 3% Giemsa solution for 20-40 min and examined under a light microscope.
RECQL5 Interacts with RAD51 through a Conserved Region-A RAD51-interacting region has been previously identified in RECQL5 by an in vitro assay using a series of RECQL5 fragments (21). We confirmed and extended the previous analysis in vivo by expressing a series of FLAG-tagged RECQL5 fragments (Fig. 1A) in HEK-293 cells and performing immunoprecipitation-coupled Western blot (IP-Western) to identify the RAD51 interaction domain. This strategy has been previously used to identify the KIX and SRI domains as the Pol II interaction domains of RECQL5 (27). We found that the region (residues 621-900) between the KIX and SRI domains is both necessary and sufficient for RAD51 association (Fig. 1, A and B, lane 8). Because RAD51 is highly conserved in evolution, it seemed likely that its binding residues in RECQL5 should also be well conserved. Therefore, we mutagenized several conserved residues within this region (Fig. 1C) and found that mutant B (F666A/Q667A) substantially reduced RAD51 association, whereas mutant A (K651A/K653A/R654A) and mutant C (K750A/Q752A) retained normal association (Fig. 1, C and D). Thus, the region containing Phe-666 and Gln-667 is required for RAD51 association, in agreement with the previous in vitro data indicating that Phe-666 and residues between positions 652 and 674 are required for RAD51 interaction (21). We noted that although mutant B has reduced association with RAD51, it retained normal association with Pol II (Fig. 1D), indicating that RECQL5-RAD51 association is independent of RECQL5-Pol II interactions.

FIGURE 1. RECQL5 interacts with RAD51 independently of the KIX and SRI domains. A, schematic diagram of the wild type and truncated forms of FLAG-RECQL5. The N-terminal helicase, RECQ C-terminal (RQC), KIX, and SRI domains are shown along with the newly identified BRC repeat. The RAD51 interaction data are summarized on the right. B, immunoblotting shows the associations between different RECQL5 mutants and RAD51. HEK-293 cells were transfected with expression plasmid of full-length and various RECQL5 deletion constructs. The mixtures of FLAG IP were analyzed by immunoblotting with antibodies to FLAG and RAD51. C, sequence alignment of RECQL5 from different species. The residues mutated are marked with asterisks. All of the residues were converted to alanine. D, IP-Western shows that different RECQL5 point mutants coimmunoprecipitated with different amounts of RAD51 but with comparable amounts of Pol IIo and Pol IIa. The FLAG-tagged RECQL5 and its different point mutants were transfected into HEK-293 cells. Co-immunoprecipitation was performed using the FLAG antibody.
A BRC Repeat Variant Is Present in the RAD51 Interaction Domain of RECQL5-Sequence analysis revealed that the RAD51 interaction domain of RECQL5 contains two conserved motifs that are either identical or similar to those previously described in the BRC repeat of BRCA2 ( Fig. 2A) (14). For example, the consensus sequence of motif 1 in RECQL5, FX(T/S)A, is identical to the corresponding sequence of the BRC repeat (FXXA) (Fig. 2A). In addition, the motif 2 sequence of RECQL5, LLDE, is also similar to the consensus sequence of motif 2 of the BRC repeat (hhXa, where "h" represents a hydrophobic and "a" represents an acidic residue). Based on these observations, we have named the RECQL5 region that encompasses the two conserved motifs as BRCv (BRC repeat variant).
A Homology Model Predicts That BRCv Binds RAD51 Like BRC-Structure predictions suggested that RECQL5-BRCv, like the BRCA2-BRC repeat, is largely devoid of regular secondary structure (31). A homology model for the RECQL5-BRCv repeat was generated in the SWISS-MODEL server, using the RAD51-BRC4 structure (Protein Data Bank entry 1N0W) as the template (29). The model predicts that the conserved Phe and Ala side chains of the BRCv FXXA motif 1 may make contact with the hydrophobic pocket on the core catalytic domain of RAD51, part of the oligomerization interface, as seen in the RAD51-BRC4 structure (Fig. 2, A and C) (11). The LLDE motif 2 of BRCv is expected to bind a region of RAD51 distant from the oligomerization surface. The first two hydrophobic residues (Leu-700 and Leu-701) are predicted to occupy the same pocket occupied in RAD51 by Leu-1545 and Phe-1546 of BRCA2-BRC4, and the acidic residue in the fourth position (Glu-703) is expected to form a salt bridge with Arg-250 of RAD51, as seen for Glu-1548 of BRCA2-BRC4 (Fig. 2C) (11,14). We note that the sequence that intervenes between the two BRCv motifs, predicted by the model to form a helix and a loop, is twice the length (25 amino acids) of that of the BRCA2-BRC repeat (12 amino acids) and therefore unique to the RECQL5-BRCv.
Both Motifs of the BRCv Are Needed for RECQL5-RAD51 Association-A previous study has shown that both motifs of BRCA2-BRC4 are required for RAD51 binding and BRC function (14). We investigated whether the two motifs of BRCv are similarly required for RECQL5 to associate with RAD51 by structure-guided mutagenesis, using the homology model described above. In motif 1, we substituted F666E or A669E to disrupt hydrophobic interactions at the predicted BRCv-RAD51 interface (Fig. 2C). In motif 2, we substituted L700E or L701E to disrupt the predicted hydrophobic interactions and E703A to abolish the predicted salt bridge with RAD51-R250 ( Fig. 2C) (11,14). We found that each mutation substantially reduced the amount of RAD51 that co-immunoprecipitated with RECQL5 (Fig. 2D, lanes 2-5). The results support the inference that both motifs are integral parts of a BRC repeat variant and indispensable for RAD51 interaction.
We noted that RECQL5 mutants in which only one motif of BRCv is mutated retained partial association with RAD51 (about 10-40%). We therefore generated several double mutants that simultaneously mutate both motifs and found that these mutants were more severely defective in RAD51 association (Fig. 2D, lanes 6-9). The data suggest that BRCv can bind RAD51 through either motif and that only when both motifs are inactivated is RAD51 binding abolished.
A Conserved Residue outside Motif 1 of BRCv Contributes to RAD51 Association-In addition to the two motifs, conserved residues next to motif 1 in BRCA2-BRC4 have also been shown to contribute to RAD51 association (11,14). In particular, mutation of one of the hydrophobic residues, Val-1532, disrupts BRCA2-RAD51 association (32). We found that the corresponding residue of Val-1532 in BRCv is highly variable in RECQL5 from different species ( Fig. 2A), indicating that this residue in BRCv is dispensable for RAD51 association. In contrast, BRCv has two highly conserved hydrophobic residues next to motif 1, Leu-672 and Met-673. Substitution of Leu-672 with Ala substantially reduced RECQL5-RAD51 association (supplemental Fig. S1), consistent with the notion that residues outside of motif 1 also contribute to RAD51 association.
Motif 1 of BRCv Is More Important than Motif 2 for Suppression of D-loop Formation-RECQL5 has been shown to possess anti-recombinase activity, evidenced by its ability to inhibit RAD51-mediated D-loop formation (20). This activity was later shown to be decreased in two mutants localized in motif 1: F666A, which substituted the conserved Phe in the FX(T/S)A consensus sequence, and Δ652-674, in which motif 1 was deleted (21). We studied whether one or both motifs of BRCv are required for the anti-recombinase activity of RECQL5, using the same D-loop assay (20). For this purpose, we expressed and purified several recombinant RECQL5 mutant proteins from E. coli that harbored mutations in either motif 1 (F666E) or 2 (L700E and E703A) or in both (F666E/L700E and F666E/E703A) (Fig. 3A). All of the mutant proteins behaved like the wild type counterpart during purification and had wild-type levels of ATPase activity (Fig. 3B).
We found that whereas the wild type RECQL5 protein inhibited D-loop formation efficiently, the motif 1 mutant was deficient in this activity (Fig. 3, C-E). The result is in agreement with published data (21). In contrast, the results showed that the two motif 2 mutants are less affected in their D-loop inhibitory activity. As expected, the two variants with mutations in both motifs were more deficient in D-loop inhibitory activity (Fig. 3, C-E). These results indicate that although both motifs of the BRCv are required for optimal association between RECQL5 and RAD51, motif 1 is more essential for anti-recombinase function, with motif 2 seeming to play only a minor role.

FIGURE 2. ... (DROME1 and DROME2). Notably, Drosophila RECQL5 has two potential BRCvs, whereas that from other species has only one. B, sequence alignment of putative BRCvs identified in four helicases (Srs2, Mph1, Sgs1, and Pif1) from different yeast species. The asterisks indicate the residues mutated to alanine. C, homology model of the RECQL5-BRC repeat. RAD51-BRC4 (BRCA2) (Protein Data Bank entry 1N0W) is shown overlaid with the homology model of RECQL5-BRCv. Cyan, RAD51; orange, BRCA2-BRC4; green, RECQL5-BRCv. Residues associated with motif 1 and motif 2 are shown in stick representations and labeled accordingly. D, IP-Western shows that RECQL5 with mutations in either motif 1 or 2 co-immunoprecipitated with reduced levels of RAD51, whereas that with mutations in both motifs almost completely lost RAD51 association. The numbers below lanes 1-5 indicate relative levels of RAD51 that co-immunoprecipitated with an indicated RECQL5 mutant, with the wild type protein being set as 1.0.
Both Motifs of BRCv Are Required by RECQL5 to Suppress SCE-Using a trans-dominant negative assay in human HEK-293 cells, a small difference (about 15-20%) has been previously observed between a RECQL5 motif 1 mutant (F666A) and the wild-type protein in the down-regulation of HR-dependent repair of double strand breaks (21). One caveat of this assay is that the exogenous protein needs to be overexpressed to a level significantly higher than that of the endogenous protein. To circumvent this requirement, we utilized RECQL5-knock-out chicken DT40 cells (18,27) to investigate whether the conserved motifs of BRCv are needed for RECQL5 to promote genome stabilization in vivo. Because RECQL5 and BLM are functionally redundant in DT40 cells (RECQL5−/− single mutant cells lack obvious genome instability phenotypes), we studied functions of RECQL5 in RECQL5−/−/BLM−/− double mutant cells, which exhibit a higher SCE frequency and CPT sensitivity compared with BLM−/− single mutant cells (18,27). We have previously shown that reintroduction of wild type RECQL5 protein into RECQL5−/−/BLM−/− cells restored the SCE frequency and CPT sensitivity to those of the BLM−/− cells, whereas reintroduction of RECQL5 mutants deficient in association with RNA polymerase II failed to fully rescue (27). Here we used the same RECQL5−/−/BLM−/− mutant cells to study whether various RECQL5-BRCv mutants (Fig. 4A) can correct these phenotypes.
All mutants were expressed at levels comparable with that of the wild-type protein (supplemental Fig. S2). In addition, all four variants with single-motif mutations in BRCv associated with RAD51 at reduced levels by IP-Western, and those with mutations in both motifs associated with little or no RAD51 (supplemental Fig. S2, lanes 1-8). These results are in agreement with the finding that both motifs of BRCv are required for normal association with RAD51 in human cells (Fig. 2D, lanes 6-9).
We noticed that the levels of RAD51 that associate with three single mutants (F666E, A669E, and L700E) were somewhat higher than those observed in human cells (compare Supplemental Fig. S2, lanes 2-4, with Fig. 2D, lanes 2-4). This could be due to the use of human proteins in chicken cells, so that the observed RECQL5-RAD51 association could be somewhat different compared with that in human cells.
As reported before (18,27), the SCE level of RECQL5−/−/BLM−/− cells was found to be significantly higher than that of BLM−/− cells (28.8 versus 18.8), and this enhanced level of SCE was completely suppressed by reintroduction of wild type RECQL5 protein (Fig. 4, A and B). In this assay, lower SCE suppression efficiency correlates with a stronger protein defect, with the suppression efficiency of the wild type protein being set as 100%. Reintroduction of either motif 1 or 2 mutants partially suppressed the enhanced SCE levels of RECQL5−/−/BLM−/− cells. Suppression efficiency ranged from 34-47% for the two motif 1 mutants (F666E and A669E) to 67-81% for the two motif 2 mutants (L700E and E703A). Transfection of the variants with mutations in both motifs (F666E/L700E, A669E/L700E, and F666E/E703A) resulted in SCE suppression efficiency that was lower than that seen with single mutants; the level of SCE in cells that expressed these compound mutants was statistically indistinguishable from that of RECQL5−/−/BLM−/− cells (Fig. 4, A and B). These results are in accord with the RAD51 binding data in Fig. 2, indicating that the RECQL5-RAD51 interactions mediated by both motifs of BRCv are required for optimal suppression of SCE. The observations that the two motif 1 mutants are more deficient in SCE suppression than the two motif 2 mutants correlate well with the D-loop inhibition data (Fig. 3) and suggest that motif 1 is more important than motif 2 for RECQL5 to regulate RAD51 activity during HR in vivo.
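The suppression-efficiency scale used above (wild-type rescue of the SCE level from 28.8 down to 18.8 defined as 100%, no rescue as 0%) reduces to a simple linear interpolation between the two endpoint values quoted in the text. A minimal sketch of that calculation; the per-mutant SCE counts themselves are not given in this excerpt, so only the endpoints are checked:

```python
def sce_suppression_efficiency(sce_complemented,
                               sce_double_ko=28.8,   # RECQL5-/-/BLM-/- SCE level (from text)
                               sce_wt_rescued=18.8): # BLM-/- level restored by WT RECQL5
    """Percent SCE suppression, with wild-type rescue defined as 100%."""
    return 100.0 * (sce_double_ko - sce_complemented) / (sce_double_ko - sce_wt_rescued)

print(sce_suppression_efficiency(18.8))  # wild-type rescue -> 100.0
print(sce_suppression_efficiency(28.8))  # no rescue -> 0.0
```

On this scale, a mutant-complemented SCE level between the two endpoints maps directly to the 34-47% and 67-81% ranges reported for the motif 1 and motif 2 mutants.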
Both Motifs of BRCv Are Required by RECQL5 to Promote Cellular Resistance to CPT-induced Replication Stress-We studied whether RECQL5 mutants carrying mutations in different motifs of BRCv are defective in promoting resistance to CPT-induced replication stress. Consistent with our previous findings, introduction of human RECQL5 into RECQL5−/−/BLM−/− DT40 cells largely corrected the hypersensitivity of these cells to CPT (Fig. 4C) (27). Importantly, RECQL5 variants carrying mutations in either motif 1 (F666E and A669E) or motif 2 (L700E and E703A) were only partially active in rescuing the hypersensitivity to CPT, whereas variants carrying combined mutations in both motifs (F666E/E703A, F666E/L700E, A669E/L700E) were more deficient than single motif mutants in this assay (Fig. 4, C and D, and supplemental Fig. S3). These results correlate with the degree of RAD51 binding (Fig. 2D) by revealing that RECQL5 variants with single-motif mutations bound reduced amounts of RAD51 and partially supported CPT resistance, whereas variants with mutations in both motifs were more deficient in RAD51 binding and thus more defective in supporting CPT resistance. The results are also consistent with the SCE data and again suggest that both motifs of BRCv are required for the normal function of RECQL5 during genome stabilization.
BRCv Is Needed in Genome Maintenance Pathways Dependent on the Helicase Activity or the KIX Domain of RECQL5-We have previously shown that RECQL5 with mutations in either the helicase domain (K58R, which inactivates the helicase activity) or the KIX domain (E584D, which disrupts association with Pol IIa) were partially deficient in correcting the higher SCE level and CPT hypersensitivity of RECQL5−/−/BLM−/− cells, whereas RECQL5 with mutations in both domains was completely deficient, suggesting that these two domains are needed for the integrity of different genome maintenance pathways (27). We investigated whether the helicase- and KIX-dependent pathways require BRCv. For this purpose, we generated a mutant that compromises the BRCv and helicase function (K58R/F666E/E703A) and another that inactivates both the BRCv and KIX domain (E584D/F666E/E703A). IP-Western confirmed that these two mutants were as deficient in RAD51 association as the BRCv mutant (F666E/E703A). Importantly, neither the helicase nor the KIX domain mutation affected RAD51 binding (supplemental Fig. S2, lanes 9, 10, and 12). We found that the SCE level and CPT sensitivity of RECQL5−/−/BLM−/− cells expressing the BRCv-helicase double mutant were higher (i.e. more functionally impaired) than those of cells complemented by the helicase single mutant but were comparable with those of cells complemented by the BRCv mutant (Fig. 5, A and B; also see Fig. 4A). Similarly, the SCE level and CPT sensitivity of cells expressing the BRCv-KIX double mutant were higher than those of cells complemented by the KIX single mutant but were comparable with those of cells complemented by the BRCv mutant (Fig. 5, A and C; also see Fig. 4A). These results suggest that both the helicase- and KIX domain-dependent genome maintenance pathways require BRCv to function.
Potential BRCvs Are Present in Other HR Regulatory Proteins-One implication of our studies is that BRCv may be utilized by proteins other than RECQL5 to regulate HR. When we searched protein databases with consensus sequences of the two BRCv motifs, we identified potential BRCv motifs in several HR regulatory proteins, including yeast helicases Sgs1, Srs2, Mph1, and Pif1 (see Fig. 2B). Interestingly, the first three helicases all have anti-recombinase activities similar to that of RECQL5, including disruption of RAD51-made D-loops and suppression of crossover recombination (20,24,25,33,34). The data imply that the predicted BRCv motifs in these helicases may play a similar role in modulating the interaction with Rad51 as does the BRCv motif of RECQL5.
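The database search described above can be mimicked with a simple pattern scan. The sketch below encodes motif 1 as FX(T/S)A and motif 2 as two hydrophobic residues, any residue, then an acidic residue, separated by a linker; the exact hydrophobic alphabet and the 18-25 residue linker bounds are assumptions for illustration, not the authors' actual query:

```python
import re

# Illustrative BRCv scan: motif 1 FX(T/S)A, a linker, then motif 2 "hhXa"
# (h = hydrophobic, a = acidic). The hydrophobic set and the 18-25 residue
# linker length are assumptions for this sketch.
HYDROPHOBIC = "AVLIMFWY"
BRCV = re.compile(rf"F.[TS]A.{{18,25}}[{HYDROPHOBIC}][{HYDROPHOBIC}].[DE]")

def find_brcv(seq):
    """Return (1-based start, matched span) for each candidate BRCv."""
    return [(m.start() + 1, m.group()) for m in BRCV.finditer(seq)]

# Toy sequence with motif 1 (FQTA), a 20-residue linker, and motif 2 (LLDE)
toy = "MKT" + "FQTA" + "G" * 20 + "LLDE" + "RRS"
print(find_brcv(toy))  # one hit starting at position 4
```

A real search would of course run this (or a profile-based equivalent) over full proteome databases and then filter hits by conservation, as done for Sgs1, Srs2, Mph1, and Pif1 here.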
A Predicted BRCv Motif in Srs2 Is Critical for RAD51 Association and Suppression of D-loop Formation-One of the putative BRCv motifs in Srs2 (see Fig. 2B) is located within the Rad51-binding region that has been previously mapped (residues 783-1038) (35). To study whether this BRCv is important for Srs2 to bind Rad51 and suppress HR, we mutated two residues that are highly conserved in RECQL5-BRCv motifs and critical for RECQL5 to bind RAD51: one within the predicted motif 1 (F837A) and the other just outside of the motif 1 (L844A) (see Fig. 2B). The Srs2 wild type and mutant proteins were expressed in E. coli, purified to near homogeneity (Fig. 6A), and analyzed for RAD51 association as described above for RECQL5. Both mutants had ATPase activity similar to that of the wild type protein (Fig. 6B), indicating that the proteins are folded properly. Importantly, the mutant L844A was strongly deficient in both Rad51 interaction (Fig. 6C) and inhibition of Rad51-mediated D-loop formation (Fig. 6, D and E), whereas the mutant F837A was largely normal in Rad51 association ( Fig. 6C) but was modestly defective in suppression of D-loop formation at low protein concentrations (the suppression was about 70% for the wild type and 58% for the F837A mutant) (Fig. 6, D and E; compare lanes 5 and 3). Overall, these data support our prediction that a BRCv is present in Srs2, and this motif is critical for Srs2 to bind Rad51 and inhibit D-loop formation.
A BRC Repeat Variant in RECQL5 and Srs2 Mediates RAD51 Association-Although the BRC repeat had not been found in proteins other than BRCA2, we demonstrated that an HR regulatory protein, RECQL5, possesses a BRC repeat variant (BRCv) that resembles the BRCA2-BRC repeat in terms of structure and interaction motifs. Notably, both BRCv and BRC comprise two conserved motifs separated by a linker sequence, and the consensus sequence is identical for motif 1 and very similar for motif 2. Moreover, the two motifs in BRCv, as has been documented for BRC (14), are required for optimal RAD51 association; mutations in either motif reduce the ability of RECQL5 to bind RAD51, to inhibit RAD51-mediated D-loop formation, to suppress SCE, and to resist CPT-induced replication stress. Notably, using BRCv consensus sequences to search protein databases, we identified potential BRCv motifs in other HR regulatory proteins (Fig. 2B). We showed that a point mutant (L844A) in the predicted BRCv of Srs2 is strongly deficient in both Rad51 association and D-loop inhibition (Fig. 6). Our findings thus expand the role of the BRC repeat beyond BRCA2 and its orthologs. This strengthens the inference that the BRCv-mediated RAD51 association is important for HR regulation in general. Although the data presented herein indicate that the BRCv regulates interaction with Rad51 to mediate the anti-recombinase function of helicases, it remains an intriguing possibility that this motif may also promote Rad51 interaction in a prorecombination role under other circumstances.
We note that RECQL5 and several other vertebrate helicases (FBH1, RTEL1, and PARI) have been proposed to be functionally equivalent to yeast Srs2 (20, 36-38). Our finding that Srs2 contains a functional BRCv favors RECQL5 as the Srs2 equivalent in humans.
BRCv and BRC Have both Common and Unique Features-The structural and functional similarities between BRCv and BRC suggest that the two domains may engage the same epitope on RAD51. For example, motif 1 of BRCv is predicted to bind the same RAD51 oligomerization interface (86FXXA89) as does motif 1 of BRC, which is expected to interfere with a crucial contact between RAD51 monomers to prevent RAD51 filament formation (11). On the other hand, motif 2 in BRCv is expected to interact with a pocket on RAD51 that is distant from the oligomerization surface (11,14). In support of this model, RECQL5 motif 1 mutants are deficient in inhibiting RAD51-mediated D-loop formation, whereas RECQL5 motif 2 mutants have reduced, albeit still significant, inhibitory activity. Moreover, motif 1 mutants of RECQL5 are more defective than motif 2 mutants in SCE suppression. These data suggest that interaction mediated by motif 1 at the RAD51 oligomerization interface is more important in regulating RAD51 activity than interaction by motif 2 at the distant pocket. This may help to rationalize the observation that motif 1 is strictly conserved in RECQL5 orthologs through evolution, whereas motif 2 is not (Fig. 2A).
BRCv and BRC nevertheless differ from one another in several ways. First, the residues near motif 1 are different (Fig. 2A) between the two. In the BRCA2-BRC repeats, hydrophobic residues scattered in the linker between motif 1 and motif 2 are required for the interaction between the linker and RAD51 as the repeat wraps around RAD51 (11,14,32). This hydrophobic patch is not found in RECQL5-BRCv. Likewise, a pair of hydrophobic residues conserved in the RECQL5-BRCv immediately adjacent to the FXXA motif 1 are not found in the BRCA2-BRC repeats. One of these residues (Leu-672) is also conserved in Srs2-BRCv (Leu-844), and mutation of this residue in either RECQL5 or Srs2 substantially reduces RAD51 association. Thus, interactions with RAD51 outside the two primary motifs appear to be different for BRCv versus BRC. Second, the linker sequences between the two motifs are different; the length in BRC is constant (12 residues), whereas that in BRCv is highly variable and also longer (18-25 residues) (Fig. 2A). The extra residues in BRCv are of unknown significance. Third, whereas BRC exists in multiple copies in human BRCA2, BRCv is present as a single copy in RECQL5 from most species. It was shown that BRC repeats within BRCA2 can be classified into two groups that have complementary functions and can work synergistically to activate HR (39). Speculatively, a single copy of BRCv may be sufficient for HR suppression, whereas cooperative interactions by multiple BRC repeats could be required for optimal activation of HR. Interestingly, Drosophila RECQL5 contains two putative BRCv repeats (Fig. 2A), hinting that BRCv may also function in tandem as seen in BRCA2-BRC.
We noted that the motif 1 mutant of Srs2-BRCv is only mildly defective in D-loop inhibition and is indistinguishable in Rad51 association compared with the wild type protein (Fig. 6). This differs from the motif 1 mutant of RECQL5 that is strongly defective in both assays (Fig. 2) (21). The data imply that Srs2 may be less dependent on motif 1-mediated Rad51 interactions compared with RECQL5.
Two Pathways Mediated by Helicase and KIX Domains Converge to Regulate BRCv-mediated Recombination-Previously, several functional domains were identified in RECQL5: the helicase domain that is involved in DNA translocation and the KIX and SRI domains that mediate the association with different forms of RNA Pol II (27,40). The findings here add BRCv as a new distinct functional domain that enables RECQL5 to interact with RAD51. How do these different domains and their associated activities cooperate to accomplish RECQL5 functions? Our previously reported double mutant analyses have shown that the helicase and KIX domains act in independent pathways (27), whereas here we demonstrated that the BRCv is needed for the functional integrity of both pathways that are dependent on either the helicase or KIX domain. We suggest that BRCv-mediated RAD51 association may represent a common denominator of the two pathways that suppress SCE and confer resistance to CPT-induced replication stress (Fig. 5D). Although it seems likely that the helicase activity allows RECQL5 to translocate on DNA and dissociate RAD51 monomers from DNA in a processive fashion, it remains unclear how KIX affects BRCv-dependent RAD51 regulation. One possibility is that when the helicase/DNA translocase activity of RECQL5 is inactivated, the KIX-associated Pol II may recruit a backup helicase to power the translocation of RECQL5 on DNA in order to effect RAD51 removal.
In summary, our results suggest that the BRC repeat could be a general RAD51 regulatory module that can be used to either promote HR, as in the case of BRCA2, or repress HR, as in the case of RECQL5.
Research Note: Quality parameters of turkey hens breast fillets detected in processing plant with deep pectoral myopathy and white striping anomaly
The increase in the consumption of poultry meat intensified production, which allowed the emergence of myopathies associated with broiler and turkey meat. The aim was to examine possible quality alterations in 240 Pectoralis major muscles (breast fillets) from carcasses of turkey breeder hens. Regarding DPM, 120 samples of breast fillets from turkeys of the Nicholas strain, with the Pectoralis minor muscle attached, were selected according to the occurrence of the myopathy in the Pectoralis minor muscle (tender), as follows: DPM score 2 (n = 40), DPM score 3 (n = 40), and a control group unaffected by DPM, score 0 (n = 40). Then, a different 120 samples from the same flock of birds were selected according to the White Striping (WS) anomaly in the Pectoralis major muscle (breast fillets), considering the degree of severity of the striations apparent in the muscle, as follows: moderate (n = 40), severe (n = 40), and a control group (normal) without the presence of the WS anomaly (n = 40), set up as a completely randomized design with 3 treatments for DPM and WS. We evaluated, in the meat of turkey breeder hens, color, water-holding capacity (WHC), cooking loss (CL), shear force (SF), sarcomere length (SL), and total, soluble, and insoluble collagen contents. The color parameters lightness (L*), redness (a*), and yellowness (b*) of turkey breeder hen breast fillets were altered by the occurrence of DPM and WS, and, except for CL, there were differences for WHC and SF (P < 0.05). Significant differences were also observed for sarcomere length (P < 0.05) between fillets without myopathies and those with DPM scores 2 and 3. Higher values of total collagen (%) were observed for the most severe category of involvement for both myopathies.
DPM and WS affect the color and partially reduce the texture of the breast fillet meat of turkey breeder hens, and this may have a negative economic impact on the meat industry, because these are the main points evaluated by the consumer in the most valuable commercial cut.
INTRODUCTION
The last decades have witnessed an increase of consumer preference for poultry meat over other types of muscle foods. This increase in the consumption of poultry meat intensified production, which allowed the emergence of myopathies associated with broiler and turkey meat. These myopathies include Deep Pectoral Myopathy (DPM), White Striping (WS), and Wooden Breast (WB), among others. The most studied myopathies and anomalies in broiler meat are White Striping and Wooden Breast and they are even less studied in turkeys. Therefore, further studies on their possible effects to the quality of the turkey meat are still required, as well as additional studies of how deep pectoral myopathy, which occurs in the Pectoral minor muscles, could affect the quality of attached breasts fillets.
Although the mechanism by which high growth rates in modern broilers trigger myopathies is not yet fully known, it is already clear that heavier birds have a higher incidence of muscular diseases (Lorenzi et al., 2014), making them an important objective of research. However, new obstacles to the industry, such as the appearance of DPM and WS, in different degrees of severity, have increased the need for studies on the physical, chemical, and histological changes that genetic 1 progress can introduce to hens and matrices (Petracci and Cavani, 2011).
DPM is characterized by muscle degeneration, which causes necrosis and atrophy, especially in the Pectoralis minor or supracoracoideus muscle (tender). Its lesions can affect both portions of the Pectoralis minor muscles, being uni- or bilateral, and vary in color, evolving from a pinkish, blood-like appearance to a grayish-green discoloration (Bilgili and Hess, 2008). The occurrence of DPM depends on factors such as rearing conditions, age, weight, sex, and genetic strain (Kijowski et al., 2014), and it is accentuated in commercial turkeys because of the lack of exercise of their pectoral muscles, owing to the inactivity of the birds on the farms.
The WS anomaly is characterized by the occurrence of white striations viewed parallel to the muscle fibers, especially in the ventral surface (skin side) of the Pectoralis major muscle (breast fillet), and may present varying degrees of severity, being classified as normal (NORM), moderate (MOD), and severe (SEV). This anomaly is directly associated with heavier and/or higher growth rate poultry (Kuttappan et al., 2012).
Appearance and texture are the two most important quality attributes for poultry meat. Poultry meat color is a critical food quality attribute and is important to consumers when selecting a raw meat product in the marketplace. After purchase, the most important point is the meat texture, which is perceived at consumption time and can affect the final quality assessment of the product. These two attributes are decisive for the consumer, as one (color) directly impacts the choice and purchase decision and the other (texture) affects customer loyalty.
Thus, the present study examines the color and texture quality parameters of the Pectoralis major muscle (breast fillets) from carcasses of turkey breeder hens affected by DPM in the Pectoralis minor (tender) muscle at its different degrees (score 2, score 3, and an unaffected control group, score 0) and by WS in the Pectoralis major muscle at its different degrees (moderate, severe, and an unaffected control group).
Sample Collection
All samples were selected at 3 h postmortem from a commercial turkey slaughter plant in the southern region of Brazil, following the procedures adopted by the processing plant. The turkeys were slaughtered according to standardized industrial practice, consisting of electrical stunning, bleeding, scalding, plucking, evisceration, chilling, and deboning. Samples were harvested from turkey breeder hens of the Nicholas strain at disposal age (450 d), at an average weight of 13.0 kg. For the classification step, breast meat samples, comprising the Pectoralis major muscle together with the Pectoralis minor muscle, were selected at random on the slaughter line according to the occurrence of the myopathy in the Pectoralis minor muscle (tender): DPM score 2 (n = 40), DPM score 3 (n = 40), and a control group unaffected by DPM, score 0 (n = 40). The samples were classified for the degree of severity of DPM in the Pectoralis minor muscle before the two breast muscles were separated, so that only the affected part of the carcass would be discarded.
In accordance with the methodology adopted by Bilgili and Hess (2008), samples exhibiting well-defined lesions on the Pectoralis minor (tender) muscle were classified as DPM score 2; some of these lesions were surrounded by a clear hemorrhagic ring. Samples showing progressive degeneration of the Pectoralis minor muscle, with the damaged muscle tissue having a greenish appearance, were classified as DPM score 3 (Figure 1). After this classification step, the Pectoralis minor muscle of each sample was discarded and the remaining Pectoralis major muscle (breast fillets) was sent for quality analyses.
Then, 120 samples, from the same flock of birds, were selected according to WS anomaly in the Pectoralis major muscle (breast fillets) for the macroscopic classification, considering the degree of severity of the striations apparent in the muscle, as follows: moderate (n = 40), severe (n = 40), and a control group (normal) without the presence of WS anomaly (n = 40), according to the methodology used by Kuttappan et al. (2012) ( Figure 2).
The MOD classification was given to fillets exhibiting white striations thinner than 1 mm but visible on the surface of the muscle. Fillets showing white striations parallel to the muscle fibers, thicker than 1 mm and easily visible on the surface of the breast fillet, were classified as SEV. Fillets without white striations were classified as NORM.
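The scoring rule above can be sketched as a small function. This is illustrative only: the function name and the use of a measured thickness are assumptions, since in practice scoring is done visually.

```python
def classify_ws(striation_thickness_mm):
    """Classify a breast fillet by white striping (WS) severity.

    Thresholds follow the scoring described in the text
    (Kuttappan et al., 2012): no striations -> NORM,
    striations thinner than 1 mm -> MOD, 1 mm or thicker -> SEV.
    `striation_thickness_mm` is None when no striations are visible.
    """
    if striation_thickness_mm is None:
        return "NORM"
    if striation_thickness_mm < 1.0:
        return "MOD"
    return "SEV"

print(classify_ws(None), classify_ws(0.5), classify_ws(1.5))
```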
Meat texture and color analyses were performed immediately after the collection and classification of DPM myopathy and WS anomaly in the Pectoralis major muscle (breast fillet), after separation and disposal of the Pectoralis minor muscle (tender).
Laboratory Analyses
Color was determined on the ventral (skin side) and dorsal (bone side, in contact with the Pectoralis minor muscle) surfaces of the Pectoralis major muscle, at three points per surface, and the mean of the three readings was used. Meat color was measured with a colorimeter (Minolta Chroma Meter CR-400, Konica Minolta Sensing, Inc., Osaka, Japan), which employs the CIELAB system [lightness (L*), redness (a*), and yellowness (b*)].
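The triplicate-reading protocol above amounts to a per-channel average; a minimal sketch (the numeric readings are invented for illustration, not measurements from the study):

```python
import statistics

def mean_cielab(readings):
    """Average triplicate CIELAB readings from one surface.

    `readings` is a list of (L*, a*, b*) tuples, one per measured
    point; the per-channel means are the values used for analysis.
    """
    L, a, b = zip(*readings)
    return (statistics.mean(L), statistics.mean(a), statistics.mean(b))

# Three illustrative readings from the ventral surface:
ventral = [(55.2, 3.1, 8.4), (54.8, 3.3, 8.1), (55.6, 3.2, 8.7)]
print(mean_cielab(ventral))
```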
Water-holding capacity (WHC) was determined with 2-g samples of the Pectoralis major muscle (breast fillets). For cooking loss (CL), breast fillet samples were weighed, packed, cooked in a water bath at 85°C for 30 min, cooled at room temperature, and weighed again. Subsamples from the CL analysis, with a cross-sectional area of 1 cm², were then used to determine shear force (SF) with a Warner-Bratzler device coupled to a texture analyzer (TA-XT2i, Stable Micro Systems Ltd., Godalming, UK).
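Cooking loss from paired weighings reduces to a one-line calculation; a sketch (the weights shown are illustrative, not study data):

```python
def cooking_loss_pct(raw_weight_g, cooked_weight_g):
    """Cooking loss as percent of raw weight lost during the
    85 °C / 30 min water-bath cook (weight before vs. after)."""
    return (raw_weight_g - cooked_weight_g) / raw_weight_g * 100.0

# Example: a 100.0 g raw sample weighing 78.5 g after cooking
print(cooking_loss_pct(100.0, 78.5))  # 21.5
```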
Sarcomere length was determined by phase-contrast microscopy: a 0.5-g sample was homogenized with 30 mL of a 50:50 solution of KCl (0.08 mol/L) and KI (0.08 mol/L). Total, soluble, and insoluble collagen contents were quantified by determination of the amino acid hydroxyproline, according to the methodology adapted by Carvalho et al. (2021): 5 g of frozen raw turkey breast fillet was weighed into 50-mL Falcon tubes and 20 mL of distilled water was added.
Statistical Analysis
The experiment was set up as a completely randomized design with 3 treatments for DPM (control group unaffected by DPM, score 0; and severity degrees, DPM scores 2 and 3) of 40 samples each, 120 samples in total. The same design was used for the WS anomaly: 3 treatments (control group unaffected by WS, NORM; MOD; and SEV) of 40 samples each, 120 samples in total. Data were analyzed using the One-Way ANOVA procedure of SAS 2002−2003 software (Statistical Analysis System; SAS Institute Inc., Cary, NC). Results were subjected to analysis of variance and, in case of significance, means were compared by Tukey's test with significance defined as P < 0.05.
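The design above (one-way ANOVA across three treatments, followed by Tukey's test when significant) can be sketched with simulated data. This is a sketch only: the values are simulated, and SciPy stands in for the SAS procedure actually used.

```python
import numpy as np
from scipy import stats

# Simulated L* values for the three DPM treatments (n = 40 each);
# illustrative numbers only, not the study's data.
rng = np.random.default_rng(0)
score0 = rng.normal(50.0, 2.0, 40)   # control, no DPM
score2 = rng.normal(53.0, 2.0, 40)
score3 = rng.normal(53.5, 2.0, 40)

# One-way ANOVA across the 3 treatments
f_stat, p_value = stats.f_oneway(score0, score2, score3)
print(f"F = {f_stat:.2f}, P = {p_value:.3g}")

# In case of significance (P < 0.05), means would then be compared
# pairwise with Tukey's test (e.g. scipy.stats.tukey_hsd in
# SciPy >= 1.8, or statsmodels' pairwise_tukeyhsd).
```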
The color parameters of breast fillets were altered by the occurrence of DPM and WS, with the change detectable even in MOD/score 2 cases, which did not differ (P > 0.05) from SEV/score 3 in lightness (L*), redness (a*), and yellowness (b*) of the ventral and dorsal surfaces. Turkey breast fillets affected by myopathy/anomaly (DPM and WS) showed higher lightness (L*), redness (a*), and yellowness (b*) than breast fillets free of any myopathy on both the ventral and dorsal surfaces.
Soglia et al. (2018), studying the effect of WS on turkey breast meat quality, reported that the quality traits and technological properties of WS muscles were comparable to those of unaffected samples. Because the occurrence of WS only marginally affected the quality traits of turkey meat, those authors hypothesized a species-specific physiological response to the profound changes in muscle development resulting from genetic selection. They found no difference (P > 0.05) in lightness (L*), redness (a*), yellowness (b*), CL, or SF; in the present study, by contrast, of all the quality characteristics evaluated in the breast fillets of turkey breeder hens, only CL showed no difference (P > 0.05). Petracci et al. (2013) found no differences in the L* values of broiler meat with different degrees of WS, but moderate and severe samples showed a significant increase in a* and b*. In the present study, L*, a*, and b* were higher in samples with myopathies, both DPM and WS, on the ventral and dorsal surfaces of turkey breeder hens' breast fillets. Petracci et al. (2013) also reported that the magnitude of WS found in their study shows this abnormality is becoming an important quality issue for the poultry industry: fillets showing severe WS may be downgraded in commercial plants and not marketed for fresh retailing, causing economic damage to the poultry industry.
No differences (P > 0.05) in CL were observed among treatments. Similarly, Carvalho et al. (2021) reported that the percentage of CL did not differ significantly between turkey breast meat with NORM and SEV degrees of WS, and Cavalcanti et al. (2021) also found no significant difference for this variable when studying DPM in turkeys. Tijare et al. (2016), studying birds processed at 6 wk of age, likewise reported no difference (P > 0.05) between broiler breast fillets with SEV WS and those with NORM WS.
Conversely, both WHC and SF (P < 0.05) of the turkey breeder hens' breast fillets were affected by DPM and WS. Meats without myopathies had higher WHC values than both degrees of severity (score 0 / NORM > score 2 / MOD = score 3 / SEV), a difference significant for both DPM (P = 0.0041) and WS (P = 0.0038). This reduction in WHC in turkey breeder hens' breast fillets is detrimental to the processing industry and the consumer market, as it determines the loss of water during cooking, processing, transport, and storage.
Considering SF, only samples from the most severe degree of each myopathy, score 3 and SEV, differed, showing significantly lower values (P = 0.0163 and P = 0.0052 for DPM and WS, respectively). This suggests protein breakdown in the breast fillets affected by myopathies, since greater WHC indicates intact, more soluble proteins with high functionality, and meat with greater protein functionality tends to produce products of superior quality.
Higher WHC can lead to greater muscle fiber turgor, which provides firmer texture; this is why meats without myopathies showed higher SF values. The same behavior across degrees of severity for WHC and SF was observed by Carvalho et al. (2021) for WS and, for SF, by Cavalcanti et al. (2021) studying DPM.
Significant differences were observed for sarcomere length (P < 0.05) between fillets without myopathies and those with DPM scores 2 and 3. Turkey breast fillets (Pectoralis major) from carcasses whose Pectoralis minor muscles were affected by DPM showed higher sarcomere length (SL) values than score 0 (no DPM). A possible explanation is that the actomyosin complexes of the myofibrils dissociated, resulting in an extension of sarcomere length. For the WS anomaly, however, no significant differences in SL were observed. These results are in accordance with Carvalho et al. (2021), who reported similar results in the Pectoralis major muscles of turkeys.
Higher total collagen values (%) were observed in the most severe category of both myopathies; for DPM, breast fillets affected by the myopathy showed the higher values. Soluble collagen (%) was significantly higher in DPM score 3 breast fillets, whereas in WS meat there was no difference (P > 0.05). Insoluble collagen (%) behaved the same way for DPM and WS: turkey breast meat with myopathies had higher values than meat free of myopathy. The relative insolubility of collagen is due to its high tensile strength, arising from intermolecular cross-bridges, which influences meat tenderness. The association among genetics, weight, slaughter age, and the occurrence of muscular abnormalities has been demonstrated in turkeys; all these factors require further study and may explain the differences among the studies published to date.
In conclusion, DPM and WS affect the color and reduce the water-holding capacity of breast fillet meat from turkey breeder hens. This may have a negative economic impact on the turkey meat processing industry, because color and texture are the main attributes evaluated by the consumer, and meat juiciness, determined by WHC, contributes to eating quality as well as playing a role in texture. These defects thus affect the quality of the muscle that constitutes the most valuable commercial cut. Further research on processing these myopathy-affected turkey meats in industry would be of interest, as a way to take advantage of them and to reduce the impact of losses, waste, and economic damage.
Circulating extracellular particles from severe COVID-19 patients show altered profiling and innate lymphoid cell-modulating ability
Abstract

Introduction: Extracellular vesicles (EVs) and particles (EPs) represent reliable biomarkers for disease detection. Their role in the inflammatory microenvironment of severe COVID-19 patients is not well determined. Here, we characterized the immunophenotype, the lipidomic cargo and the functional activity of circulating EPs from severe COVID-19 patients (Co-19-EPs) and healthy controls (HC-EPs), correlating the data with clinical parameters including the partial pressure of oxygen to fraction of inspired oxygen ratio (PaO2/FiO2) and the sequential organ failure assessment (SOFA) score.

Methods: Peripheral blood (PB) was collected from COVID-19 patients (n=10) and HC (n=10). EPs were purified from platelet-poor plasma by size exclusion chromatography (SEC) and ultrafiltration. Plasma cytokines and EPs were characterized by multiplex bead-based assay. Quantitative lipidomic profiling of EPs was performed by liquid chromatography/mass spectrometry combined with quadrupole time-of-flight (LC/MS Q-TOF). Innate lymphoid cells (ILC) were characterized by flow cytometry after co-cultures with HC-EPs or Co-19-EPs.

Results: We observed that EPs from severe COVID-19 patients: 1) display an altered surface signature as assessed by multiplex protein analysis; 2) are characterized by distinct lipidomic profiling; 3) show correlations between lipidomic profiling and disease aggressiveness scores; 4) fail to dampen type 2 innate lymphoid cell (ILC2) cytokine secretion. As a consequence, ILC2 from severe COVID-19 patients show a more activated phenotype due to the presence of Co-19-EPs.

Discussion: In summary, these data highlight that abnormal circulating EPs promote ILC2-driven inflammatory signals in severe COVID-19 patients and support further exploration to unravel the role of EPs (and EVs) in COVID-19 pathogenesis.
Introduction
Coronavirus disease 2019 (COVID-19) is caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). Although most infected patients have mild to moderate symptoms or are even asymptomatic, older patients and those with pre-existing chronic diseases (e.g., hypertension, diabetes, obesity) are at greater risk of developing serious complications, such as pneumonia, cytokine storm and multiple organ failure (1,2).
Specifically, ILC play a pivotal role in immune surveillance and form the front line of immune defense. Natural killer (NK) cells, one of the ILC subsets belonging to group 1 ILC (15), perform lytic functions, whereas the ILC1, ILC2, and ILC3 subsets have mainly helper functions through secretion of type 1, type 2 and type 17 cytokines, respectively (16,17). In peripheral blood, an additional subset of ILC has been identified and named ILC precursor (ILCP) because of its ability to give rise, both in vitro and in vivo, to all ILC subsets (18). ILC are largely depleted from the circulation of COVID-19 patients (13,19). The remaining circulating ILC reveal decreased frequencies of ILC2 in severe COVID-19, with a concomitant decrease of ILCP, as compared with HC. ILC2 and ILCP show an activated phenotype with increased CD69 expression, which is positively correlated with the levels of IL-6 and IL-10, while frequencies of ILC subsets are correlated with clinical and biochemical laboratory parameters associated with disease severity (19,20). However, the mechanism(s) leading to altered ILC activation and/or function in COVID-19 is yet to be determined.
Extracellular vesicles (EVs) are lipid bilayer structures with a key role within the inflammatory network. They are released from a broad variety of cells during homeostasis and cell activation, with pleiotropic effects on cell-cell signaling, by transferring bioactive molecules into recipient cells or by regulating the downstream signal cascades of receptors on target cells. Based on size and biogenesis, small and large EVs can be identified. EVs contain functionally relevant biomolecules such as proteins, nucleic acids and lipids. They have been detected in various biological fluids including blood (21-23).
In this work, considering the EV identity defined by MISEV 2018 (22) and their heterogeneity, we collectively referred to them as extracellular particles (EPs). To further understand the impact of EPs on COVID-19 infection, here we studied the lipid cargo/ phenotype of EPs in COVID-19 patients and the functional activity of circulating EPs on ILC in severe COVID-19 patients as reported by the Graphical Abstract.
Materials and methods
Patients' characteristics

Ten COVID-19 patients, admitted to the Intensive Care Unit of the IRCCS Azienda Ospedaliero-Universitaria di Bologna, were enrolled in the study. Patients were diagnosed with COVID-19 by reverse-transcriptase polymerase chain reaction viral detection on oropharyngeal or nasopharyngeal swabs. Only critical patients were considered for this study: those with respiratory failure, admitted to the intensive care unit and requiring mechanical ventilation.
Demographic and laboratory findings of all recruited COVID-19 patients are summarized in Table 1. In addition to age, sex, hospitalization duration, clinical outcome and date/timing of peripheral blood sample collection, patients were assessed for the presence or absence of the following pre-existing medical conditions: lung disease (asthma, chronic obstructive pulmonary disease (COPD)), heart disease (coronary artery disease, heart failure), peripheral vascular disease, hypertension, diabetes, obesity (BMI >30), kidney disease, autoimmune disorders, cancer, and chemotherapy for cancer. Laboratory parameters at the time of sample collection were analyzed, and the serum levels of ferritin, C-reactive protein, D-dimer, and lactate dehydrogenase were recorded for each patient, as well as the number of white blood cells (WBC), platelets (PLT), hematocrit and hemoglobin.
Regarding COVID-19 disease severity parameters, the partial pressure of oxygen to fraction of inspired oxygen ratio (PaO2/FiO2) and the sequential organ failure assessment (SOFA) score were recorded. The use of specific COVID-19-targeted treatment was also recorded. Specimens from anonymous pre-screened healthy blood controls (HC; n=10), matched for sex and age, were collected from the blood donor center. This study was approved by the Ethics Committee of the IRCCS Azienda Ospedaliero-Universitaria di Bologna, and written informed consent was obtained from all patients/controls enrolled in the study.
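The PaO2/FiO2 severity parameter recorded above is a simple ratio; a minimal sketch of how it is computed and read. The severity bands shown are illustrative general knowledge in the style of the Berlin ARDS definition, not thresholds taken from this study.

```python
def pf_ratio(pao2_mmhg, fio2_fraction):
    """PaO2/FiO2 (P/F) ratio; FiO2 is given as a fraction (0.21-1.0)."""
    return pao2_mmhg / fio2_fraction

def hypoxemia_band(pf):
    """Berlin-definition-style bands (illustrative, not from this study)."""
    if pf <= 100:
        return "severe"
    if pf <= 200:
        return "moderate"
    if pf <= 300:
        return "mild"
    return "above ARDS range"

# Example: PaO2 of 80 mmHg on 80% oxygen
ratio = pf_ratio(80, 0.8)
print(ratio, hypoxemia_band(ratio))
```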
Blood sample collection and plasma preparation
Venous EDTA-blood was kept vertically at room temperature and processed within 1 hour. After a first centrifugation of 15 min at 2,500 x g at room temperature, plasma was collected and subjected to a second centrifugation of 15 min at 2,500 x g at room temperature to obtain platelet-free plasma. Platelet-free plasma was then stored at -80°C until use.
EP isolation
Samples were defrosted at room temperature and EP isolation was achieved by size-exclusion chromatography (SEC; qEVoriginal/ 70 nm Gen 2 Column, Izon) following the manufacturer's instructions. In brief, the column was equilibrated with PBS before loading the sample (500 µl) on top of the column. Next, four fractions were collected after void volume. Then, where indicated, EP-enriched fractions were pooled for maximizing yield for downstream experiments using MWCO 30 kDa Amicon Ultra-2 Centrifugal Filters (Millipore, Merck, USA). Finally, all samples were used or stored at -80°C until use. The protein content of the EPs was determined using the Bradford assay according to the manufacturer's instructions.
EP lipid extraction and LC/MS Q-TOF analysis
The EP lipidome was quantified using an untargeted lipidomic approach. Specifically, lipids were extracted from EP samples according to the one-phase extraction method described in (36), with minor modifications. In brief, 18 mL of MMC extraction solvent was prepared by adding 5 mL of methanol (MeOH), 6 mL of chloroform (CHCl3), 6 mL of methyl tert-butyl ether (MTBE) and 1 mL of the internal standard mixture Splash I Lipidomix (Avanti Polar Lipids, USA) diluted 1:10 in MeOH. Each sample received 600 µl of MMC, was vortexed for 10 seconds and shaken at 1,600 rpm at 20°C in a T-Shaker (Euroclone). The tubes were then centrifuged for 20 min at 16,000 x g at 4°C. The supernatant was transferred to a 1.5-mL glass vial and evaporated to dryness under a gentle stream of nitrogen. The residue was resuspended in 200 µl of a 9:1 MeOH/toluene mixture and subjected to LC/MS Q-TOF analysis.
LC/MS Q-TOF analysis was carried out according to (37), after adaptation to the different instrumental configuration, using a 1260 Infinity II LC System coupled with an Agilent 6530 Q-TOF spectrometer (Agilent Technologies, Santa Clara, CA, USA). Separation was carried out on a reverse-phase C18 column (Agilent InfinityLab Poroshell 120 EC-C18, 3.0 × 100 mm, 2.7 µm) at 50°C; replicates of each sample were analyzed. The Agilent JetStream source operated as follows: gas temperature (N2) 200°C, drying gas 10 L/min, nebulizer 50 psi, sheath gas temperature 300°C at 12 L/min. MS/MS spectra were obtained using N2 at 30 V collision energy. Acquired raw data were processed using the MS-DIAL software (4.48) (38) to perform peak-picking, alignment, annotation and quantification. Lipid annotation and quantification were carried out according to the recommendations of the Lipidomics Standards Initiative (39).
At the end of the workflow, a data matrix was obtained containing the concentration in nmol/mL of the annotated lipids distributed over the various lipid classes. The LipidOne tool was used to perform an in-depth analysis of lipid compositions, called lipid building blocks (40). The volcano plot and network graphs were created with Excel (Microsoft) and Graph Editor (https://csacademy.com/app/graph_editor/) by processing the data obtained with LipidOne. The MetaboAnalyst 5.0 web platform was used to perform multivariate statistical and chemoinformatic analyses (41).
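A volcano plot pairs a fold change with a P value per lipid class; a minimal sketch of how such coordinates are typically derived from the nmol/mL data matrix. The concentrations are invented for illustration, SciPy stands in for the tools the authors actually used (Excel/LipidOne/MetaboAnalyst), and the two-sided Mann-Whitney test matches the paper's stated two-group comparisons.

```python
import numpy as np
from scipy import stats

def volcano_coords(patients, controls):
    """Return (log2 fold change of means, -log10 P) for one lipid
    class quantified in nmol/mL in the two groups."""
    lfc = np.log2(np.mean(patients) / np.mean(controls))
    p = stats.mannwhitneyu(patients, controls,
                           alternative="two-sided").pvalue
    return lfc, -np.log10(p)

# Illustrative SM concentrations (nmol/mL), not the study's data:
sm_covid = [1.1, 0.9, 1.3, 1.0, 1.2]
sm_hc = [2.4, 2.9, 2.6, 3.1, 2.7]
lfc, neglogp = volcano_coords(sm_covid, sm_hc)
print(lfc, neglogp)  # negative fold change, i.e. SM depleted
```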
MACSPlex
MACSPlex analysis was performed using the MACSPlex Exosome Kit, human (Miltenyi Biotec, Bergisch-Gladbach, Germany) according to the manufacturer's instructions. Briefly, EP-enriched pools were diluted with MACSPlex buffer and MACSPlex Exosome Capture Beads were added. After overnight incubation at room temperature under agitation, MACSPlex Exosome Detection Reagents for CD9, CD63, and CD81 were added to each sample, followed by incubation for 1 hour at room temperature. Flow cytometric analysis was carried out on a CytoFLEX flow cytometer followed by Kaluza Analysis 2.1 (Beckman Coulter Life Sciences, CA, USA). Exosomal surface epitope expression (median APC fluorescence intensity) was then recorded. Median fluorescence intensity (MFI) was evaluated for each capture bead subset, corrected by subtracting the respective MFI of the blank control (PBS, vehicle), and normalized by the mean MFI of CD9, CD63, and CD81.
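The blank correction and tetraspanin normalization described above can be sketched as follows. The marker names are real, but the MFI values are arbitrary illustrative numbers.

```python
import statistics

def normalized_mfi(raw_mfi, blank_mfi):
    """Background-correct each marker against the PBS blank, then
    normalize by the mean corrected MFI of the tetraspanins
    CD9, CD63 and CD81, as described for the MACSPlex readout."""
    corrected = {m: raw_mfi[m] - blank_mfi.get(m, 0.0) for m in raw_mfi}
    tetraspanin_mean = statistics.mean(
        corrected[m] for m in ("CD9", "CD63", "CD81"))
    return {m: v / tetraspanin_mean for m, v in corrected.items()}

# Illustrative MFI values (arbitrary units), not the study's data:
raw = {"CD9": 1200.0, "CD63": 900.0, "CD81": 1500.0, "CD326": 300.0}
blank = {"CD9": 50.0, "CD63": 40.0, "CD81": 60.0, "CD326": 20.0}
nmfi = normalized_mfi(raw, blank)
print(nmfi["CD326"])
```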
ILC2 were isolated by Fluorescence Activated Cell Sorting (FACS) on a FACS Aria III (BD) from HC and expanded in StemSpanTM Serum-Free Expansion Medium II (SFEMII, from STEMCELL Technologies) in the presence of IL-2 (100U/ml) and IL-7 (10ng/ml, both from PeproTech).
EP/ILC2 co-culture assay

ILC2 were stimulated with a cytokine cocktail (IL-2, IP-10, IL-8 and IL-6 at 20 U/ml, 100 ng/ml, 100 ng/ml and 20 ng/ml, respectively; PeproTech) alone or in combination with EPs isolated from either HC or COVID-19 patients. Co-cultures were set up using EP amounts ranging from 2 to 10 µg, which we verified did not kill the cells (data not shown). Supernatants were collected after 48 hours and cytokines were measured using a bead-based flow immunoassay, as stated above.
Statistics
All data are from at least three independent experiments. Data were analyzed with GraphPad Prism 9.4.1 for Windows (GraphPad Software, Inc., La Jolla, CA, USA). Due to the small sample size, the data were analyzed using the non-parametric Mann-Whitney test where two groups were compared, and the non-parametric Kruskal-Wallis test followed by Dunn's post-hoc test where more than two groups were compared. P-values ≤ 0.05 were considered statistically significant and are indicated in the graphs as reported by the analysis software: *p<0.05, **p<0.01, ***p<0.001, ****p<0.0001.
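The two tests named above look like this in SciPy (which stands in for the GraphPad workflow actually used; the cytokine values are simulated, not the study's measurements):

```python
from scipy import stats

# Illustrative plasma IL-6 values (pg/mL) for two groups of n = 10;
# simulated numbers, not the study's measurements.
hc = [2.1, 1.8, 2.5, 2.0, 1.6, 2.3, 1.9, 2.2, 2.4, 1.7]
covid = [45.0, 80.0, 120.0, 60.0, 95.0, 150.0, 70.0, 110.0, 55.0, 130.0]

# Two groups -> non-parametric Mann-Whitney test
u_stat, p_mw = stats.mannwhitneyu(covid, hc, alternative="two-sided")

# More than two groups -> Kruskal-Wallis; Dunn's post-hoc test is
# available in the scikit-posthocs package, not in SciPy itself.
third = [10.0, 20.0, 15.0, 12.0, 18.0, 14.0, 16.0, 11.0, 19.0, 13.0]
h_stat, p_kw = stats.kruskal(hc, covid, third)
print(f"Mann-Whitney P = {p_mw:.3g}; Kruskal-Wallis P = {p_kw:.3g}")
```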
Multiplex protein analysis shows altered EP surface signatures in severe COVID-19 patients and reveals a Co-19-EP identikit
To detect any signal produced by cells in the circulation after COVID-19 infection, we first evaluated the proteins expressed on the surface of plasma-derived EPs isolated from COVID-19 patients (Co-19-EPs) and HC (HC-EPs) using bead-based multiplex EV analysis. Considering the overall median fluorescence intensity (MFI) for specific EV markers (i.e., tetraspanins CD63, CD9 and CD81; Figure 1A), we observed that only the EV-specific tetraspanin CD81 was significantly higher in COVID-19 patients than in controls (P<0.05). Similarly, the epithelial cell adhesion molecule CD326 (EpCAM) was significantly higher (P<0.01) in EPs derived from COVID-19 patients than in controls (Figure 1B). Conversely, among differentially expressed epitopes we also found lower expression of CD19, CD24 (B-cell related markers) and ROR1 (stemness marker) in EPs from patients than in controls (Figures 1C-E). Most of the immunological-related proteins (CD1c, CD2, CD4, CD11c, CD20, CD25, CD69, CD86, CD209) as well as the hematopoietic marker CD45 showed either very low expression or were not detected in both groups. Overall, the graph reported in Supplementary Figure S2 shows the MFI for each marker detected.
Then, we tested the MFI of individual markers after normalization to the mean MFI of the specific EV markers (namely CD9, CD63, and CD81) (nMFI; Figure 1F). In addition to the above-described markers (ROR1 and CD24), we observed that Co-19-EPs differed from controls for three other markers: CD9 (tetraspanin, P<0.01), HLA-DR/DP/DQ (MHC-II, leukocyte, P<0.05), and CD146 (endothelial, P<0.05). All of them were relatively lower in Co-19-EPs compared to HC-derived EPs. CD326 (EpCAM) expression was detected only on Co-19-EPs (P<0.01). Therefore, these data indicate that a phenotype-based signature on EPs may distinguish severe COVID-19 patients from HC, warranting further investigation of circulating EPs.
Circulating EPs from severe COVID-19 patients reveal abnormal lipidomic profiling

Within EV cargos, lipids are suggested to be involved in EV formation and biological functions (42). To investigate the cargo of circulating Co-19-EPs, and eventually how SARS-CoV-2 might influence it, untargeted lipidomic analyses were performed on EPs isolated from the plasma of COVID-19 patients and HC. Lipidomic analysis revealed 1112 lipid species annotated at the molecular species level, grouped into 26 lipid classes. As reported in Table 2, we found that almost 70% of the EP-associated lipids are free fatty acids (FA) (28%), cholesteryl esters (CE) (16%), triacylglycerols (TG) (13%) and phosphatidylcholines (PC) (12%). The most significantly different lipid classes between patients and HC were FA, CE, TG, PC, ceramide (Cer), diacylglycerol (DG), lysophosphatidylcholine (LPC) and sphingomyelin (SM). Indeed, the volcano plot showed that the amounts of SM, HexCer, LPC and Cer are lower (P<0.001, respectively) while those of phosphatidylmethanol (PMeOH; P = 0.0008) and phosphatidic acid (PA; P = 0.0004) are higher in Co-19-EPs compared to HC-EPs (Figure 2A). Using known biosynthetic pathways as a reference, in Figure 2B we show some metabolic pathways activated in COVID-19 patients. Interestingly, we found an inverse correlation between O-acyl-R-carnitine (CAR) and EP markers expressed on Co-19-EPs such as CD3, CD56, and HLA-ABC. Also, lysophosphatidylethanolamine (LPE) showed a negative correlation with the expression of CD4 on Co-19-EPs. By contrast, CE positively correlated with CD86 expression on Co-19-EPs, whereas the PG lipid class correlated with the exosomal expression of CD81 (Figure 2C). In addition, we identified the presence of oxidized molecular species using the LipidOne analyses, reporting the oxidized/unoxidized species ratio or the ether/ester linkage ratio within each lipid class.
The results are represented in the volcano plot, showing that EPs from COVID-19 patients are enriched in selected lipid classes containing oxidized lipid chains, including phosphatidylethanolamine (PE; P<0.001), PC and phosphatidylglycerol (PG) (P<0.01) (Figure 2D). Conversely, considering the ether/ester ratio within the 27 lipid classes, only the phospholipids were found to contain ether bonds, with enrichment in the PE class in HC (P<0.001; Figure 2E).
Overall, we demonstrated that lipidome from EPs distinguishes severe COVID-19 patients from HC.
Specific lipid species detected in circulating Co-19-EPs correlate with disease aggressiveness scores
To explore the clinical impact of EP lipidome in COVID-19 patients, we investigated whether the lipidome profile of EPs was associated with disease severity parameters including SOFA and PaO 2 /FiO 2 scores.
As stated above, we observed a strong depletion of the SM lipid class in Co-19-EPs (Figure 3). Interestingly, we found a negative correlation between the SM class and the SOFA score (r = -0.82, P = 0.009) (data not shown). Taking into account the individual lipid species, Spearman's correlation analysis revealed that patients with higher SOFA scores had low levels of several lipid species (Supplementary Table S1).
Correlation analysis was also performed to detect any association with PaO2/FiO2 values. Regarding individual lipid species, several significant associations were observed (Supplementary Table S2).
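A Spearman correlation of the kind reported above (lipid level vs. SOFA score) can be sketched as follows; the per-patient values are invented for illustration, not the study's measurements.

```python
from scipy import stats

# Illustrative per-patient values (n = 10), not the study's data:
sofa = [4, 6, 7, 9, 10, 11, 12, 13, 14, 15]
sm_nmol_ml = [3.0, 2.8, 2.9, 2.2, 2.0, 1.8, 1.7, 1.4, 1.2, 1.1]

# Spearman's rank correlation: higher SOFA, lower SM level
rho, p = stats.spearmanr(sofa, sm_nmol_ml)
print(f"rho = {rho:.2f}, P = {p:.3g}")
```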
Taken together these data indicate that lipidomic profiling refines the accuracy of disease aggressiveness assessment among severe COVID-19 patients and supports the hypothesis that selective lipid species might act as a prognostic tool for (severe) COVID-19 patients.
Co-19-EPs reduce the cytokine production ability of ILC2
ILC are lymphocytes known to respond to a large variety of stimuli, including cytokines, nutrients, neuropeptides and tumor-derived factors (43,44). In particular, ILC2 were shown to be sensitive also to lipid mediators and EV stimulation (13,19). To understand whether ILC2 could respond differently to EPs from COVID-19 patients, we first analyzed the profile of circulating ILC of COVID-19 patients and HC. Specifically, as already shown by others, total ILC were decreased in COVID-19 patients in comparison to HC (Figure 5A). Although we did not see any significant differences in ILC subset distribution between COVID-19 patients and HC, within the ILC2 subset we found a significant decrease in the cKit high subpopulation, paralleled by a significant increase in the cKit low population, in COVID-19 patients (Figures 5B, C). Because the cKit low population has been proposed to be the more mature and fully committed ILC2 subpopulation (45,46), our findings suggest that in COVID-19 patients only the ILC2 subset specifically secreting type 2 cytokines is enriched.
Next, we evaluated a total of 21 cytokines in plasma samples from patients with severe COVID-19 and HC (Figure 5D and Supplementary Figure S3). The plasma levels of IL-6 and IL-10 (P<0.0001, respectively) were significantly increased in patients with COVID-19 as compared to HC (Figure 5D). Similarly, the levels of IL-8 (P=0.005), IP-10 (P=0.035) and IL-5 (P=0.01; Figure 5E) were higher in COVID-19 patients compared to HC. Comparing survivors and non-survivors, only IL-5 plasma levels were significantly increased in non-survivor COVID-19 patients (P=0.045) (Figure 5E). Furthermore, several Co-19-EP protein markers reported in Figure 1 were associated with plasma cytokine levels, as reported in Figure 6. Most of the correlations were positive, except those between MFI CD3 and IL-2 and between MFI SSEA-4 and IL-6. Of interest, among the significant plasma cytokines detected in COVID-19 patients, IL-8 showed a positive correlation with CD19, CD69 and ROR1, whereas IL-6 positively correlated with MFI CD24. Importantly, MFI CD63 expression on Co-19-EPs is linked to IL-5 plasma levels, the only cytokine that differed between survivor and non-survivor patients (Figures 5E, 6).
To understand whether the combination of the proinflammatory cytokines IL-6, IL-8 and IP-10 together with the EPs isolated from HC and COVID-19 patients could impact the cytokine secretion ability of ILC2, we isolated and expanded human ILC2 from HC in vitro and stimulated them with IL-6, IL-8 and IP-10 alone or in the presence of either HC-EPs or Co-19-EPs. We found that, while the EPs from HC inhibited IL-5 and IL-10 production, the EPs from COVID-19 patients failed to downregulate these two cytokines, suggesting that the composition of the EPs isolated from COVID-19 patients supported the ILC2 activation status (Figures 7A, B). Indeed, when we compared the phenotype of ILC2 present in the PBMC of HC and COVID-19 patients, we found that COVID-19 patients' ILC2 showed a more activated phenotype characterized by an increase in CD38 and CD69 expression and a trend for increased NKG2D (Figure 7C). CD38 upregulation was present in both the cKit low and cKit high ILC2 (Figures 7D, E).
Altogether these data highlight that the presence of a well-known inflammatory microenvironment in severe COVID-19 patients might be reflected in more activated ILC2 producing high concentrations of both IL-5 and IL-10. Interestingly, at variance with HC-EPs, Co-19-EPs are unable to dampen the activation status of ILC2 in severe COVID-19 patients. (Figure caption: correlation coefficient matrix heat map reporting associations of plasma cytokines from COVID-19 patients (n=10); Spearman correlation coefficients are shown on a double-gradient color map, with purple for positive and light violet for negative correlations.)
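The correlation matrix underlying the heat map can be sketched with `scipy.stats.spearmanr`, which returns the full pairwise Spearman matrix when given a 2-D array. The cytokine panel and values below are synthetic placeholders, not the study's measurements; only n=10 matches the reported cohort size.

```python
# Hedged sketch of the Spearman correlation matrix behind a heat map
# like the one described for Figure 6. Data are synthetic.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_patients = 10  # matching the reported n=10
cytokines = ["IL-5", "IL-6", "IL-8", "IL-10", "IP-10"]  # placeholder panel
data = rng.lognormal(mean=1.0, sigma=0.5, size=(n_patients, len(cytokines)))

# Columns are variables; spearmanr returns (rho matrix, p-value matrix)
rho, pval = spearmanr(data)

# rho is symmetric with a unit diagonal; entries with pval < 0.05 would be
# the ones highlighted as significant in a double-gradient heat map.
print(np.round(rho, 2))
```

Plotting libraries such as matplotlib or seaborn would then render `rho` with a diverging colormap, as the figure caption describes.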
Discussion
In this study, we identify an EP-associated lipidomic and phenotypic signature of SARS-CoV-2 infected patients with severe disease. Most importantly, the EP lipid and protein patterns are associated with disease aggressiveness scores, highlighting the putative role of Co-19-EPs as a prognostic biomarker cargo in severe COVID-19 patients. Interestingly, critical correlations between the EP profile, lipidomic cargo and the immune-inflammatory microenvironment have been found. In addition, these circulating Co-19-EPs are unable to dampen the activated phenotype (as assessed by the ability to produce IL-5 and IL-10) of ILC2 isolated from HC. Despite the limitation of the small number of patients, we developed a valuable method for detecting the effects of SARS-CoV-2 infection and gained novel insight for studying EPs in infectious diseases.
It has recently been described that EVs might play a role in the host response to SARS-CoV-2 infection (47, 48). In the present study, the lipidomic analysis of the EPs from the enrolled COVID-19 patients demonstrated reduced expression of sphingomyelins. Together with glycerophospholipids, sphingolipids are important components of the cell membrane and regulate several processes, such as proliferation and inflammatory responses (49, 50). Moreover, sphingolipid metabolism is involved in exosome secretion (51). Inhibition of SM synthesis has been reported to slow Golgi-to-plasma membrane trafficking of vesicular stomatitis virus G protein, influenza hemagglutinin, and pancreatic adenocarcinoma up-regulated factor, suggesting that the SM biosynthetic pathway is broadly required for secretory competence (52). Therefore, SM metabolism may be a potential biomarker for identifying crucial vulnerabilities in COVID-19 patients and a potential target for therapeutic intervention against SARS-CoV-2 infection. In addition, in this study, HexCer and several SMs show an inverse association with the SOFA score, suggesting that the depletion of sphingolipid species may be closely related to the severity of the disease. We also observed a relative abundance of lipids involved in energy storage, such as triacyl- and diacylglycerols (TG and DG), in EPs from COVID-19 patients. TGs are the most abundant lipids in the human body and the major source of energy, constituting a critical component of lipoproteins (50, 53). Of note, specific TGs (including TG 40:1 and TG 40:2) were selectively increased in non-survivor subjects. In line with this, a myriad of cardiovascular manifestations are observed in COVID-19 patients (54) and, based on the role of lipoproteins in thrombosis, our data may suggest an association between increased TG levels in EPs and cardiovascular events in COVID-19 patients.
The EP surface proteins were also investigated. We found that the exosome markers CD9, CD63 and CD81 are present in EPs isolated from all groups. Among the differentially expressed proteins, CD24, CD146 and CD326 show remarkably higher expression in EPs from COVID-19 patients.
CD24 is highly expressed by immune cells and cancer cells and it is known to play an inhibitory role in B-cell activation responses and the control of autoimmunity (55). It has recently been described that CD24 stimulation of B cells may trigger a transfer of receptors functional in recipient cells via EVs (56).
CD146, a membrane and immunoglobulin superfamily protein that is normally expressed by endothelial cells and Th17 cells, promotes the adhesion, rolling and extravasation of lymphocytes and monocytes across the endothelium. Indeed, functionally, CD146 is involved in angiogenesis and inflammation (57,58).
Finally, CD326 is an adhesion molecule that is characteristic of some epithelia and many carcinomas and has been implicated in intercellular adhesion and metastasis (59). It has recently been described to play a role in coagulopathy (60, 61). Overall, the phenotype of circulating EPs from severe COVID-19 patients suggests that the hyperexpression of these EV biomarkers might contribute to affecting the immune response and the inflammatory microenvironment. For instance, it is worth noticing that a link between LPC and T cell homeostatic turnover has been reported previously (62). Herein, we find a direct association between Co-19-EPs expressing CD8 and two specific lipid species (LPC O-16:1 and LPC O-18:1), suggesting a defective role in the release of EVs by CD8+ memory T cells in COVID-19 patients (62). This hypothesis is also supported by a corresponding association with the PaO2/FiO2 failure score, which shows more aggressive disease in the patients with lower MFI for CD8 on Co-19-EPs.
The challenge of the COVID-19 pandemic is to predict intensive care admission or death of COVID-19 patients. Based on the explorative data of this study, in-depth phenotype and lipidome profiling of EPs could be considered a novel tool for better stratification of patients, aiding selection and decision-making for clinical studies and avoiding the risk of therapy-related complications. Since few data concerning the role of EPs and their lipid-associated cargo in COVID-19 are available, further studies combining lipidomic data with biological and immunological characterization may help to elucidate specific (immuno)pathogenetic mechanisms and identify novel treatment strategies for virus infections. Importantly, considering EV trafficking, the lipid composition of EV membranes may play a role in the stability of these vesicles as well as in facilitating binding to and uptake into recipient cells such as immune cells.
Along with lipidomic and surface protein analyses, we also performed co-culture experiments with circulating EPs from HC or COVID-19 patients using ILC2 as a target. In line with others, we find that ILC2 from severe COVID-19 patients show a more activated phenotype in terms of CD38, CD69 and a trend for NKG2D expression; the latter marker was already shown to be upregulated in patients with no need for mechanical ventilation and a shorter hospitalization (63). Our data suggest that the activated phenotype of ILC2, as well as their higher capacity to produce IL-5 and IL-10, might be linked with the different cargo of the circulating EPs. Indeed, only HC-EPs are efficient in suppressing the ILC2 cytokine secretion capacity, while Co-19-EPs lose this property. Whether this inhibitory capacity is due to the different lipid or protein composition of the EPs is yet to be investigated. Overall, although EVs may represent a mechanism by which SARS-CoV-2 escapes the immune system, our data indicate that circulating EPs may alarm the innate immune system by modifying the production of inflammatory cytokines. These findings shed light on the diverse effects of circulating EPs on the inflammatory/immune response in COVID-19. Consistently, we found the plasma levels of IL-5, IL-6, IL-8, IL-10 and IP-10 to be significantly higher in severe COVID-19 patients compared with control plasma. Previous data identified IL-10 and IP-10 as putative biomarkers associated with poor outcomes. In this regard, IL-10 has been shown to be a putative regulator of COVID-19 pathogenesis in association with IL-6 (64), whereas IP-10 has been investigated for its role in thrombosis in COVID-19 patients (65). For instance, IP-10 is secreted by many cell types in response to interferon-gamma (IFN-γ), including monocytes, endothelial cells and fibroblasts (66), and acts as a chemotactic agent for immune cells such as T cells, NK cells, monocytes/macrophages and dendritic cells (65).
In addition, IL-6, TNF-α and IL-8 were considered strong and independent markers for patient survival (67-69). Notably, Li L et al. showed both IL-8 and IL-6 as biomarkers of disease prognosis for COVID-19 patients, suggesting them as putative therapeutic targets (69). Of interest, IL-5 plays a crucial role in our cohort, being significantly different not only between HC and COVID-19 patients but also between non-survivor and survivor patients. The role of IL-5 in the growth, survival, and activation of eosinophils is already known (70). Although we do not find any association between IL-5 and the absolute eosinophil count of our patients (data not shown), our results suggest that the type 2 immune response is involved and may be aggravated by SARS-CoV-2-induced pneumonia (70). In previous work (71), EVs from subcutaneous immunotherapy-treated mice exerted effects on IL-5 production by ILC2, suggesting novel therapeutic options using EVs. Indeed, the potential of EVs as powerful and feasible cargo for drug delivery has also been demonstrated (72). The natural origin of EVs enables them to reduce immunogenicity compared with existing delivery systems. Thus, an EV-based drug delivery system may be an attractive candidate to also manipulate cytokine secretion by specific cell subsets as a novel effective treatment for COVID-19. Overall, our data on circulating cytokines confirm and highlight the complex immune/inflammatory network of COVID-19 pathogenesis and suggest that blocking one cytokine alone could be an ineffective strategy (64, 67, 73).
At last, even though these findings depict an "EP signature" of severe COVID-19 patients, it should be highlighted that at the time of sample collection severe COVID-19 patients were under treatment; therefore, we cannot rule out the possibility that treatment might have influenced the EP pattern.
In summary, this study demonstrates that a distinct lipidomic and phenotypic signature characterizes EPs in severe COVID-19 patients. In addition, this study sheds light on the mechanisms by which circulating EPs modulate the innate immune response. With the limitations related to the small cohort of COVID-19 patients included, these findings might have prognostic implications for EPs in severe COVID-19 patients. Since future and innovative therapeutic approaches in the current COVID-19 scenario may rely on signals carried by Co-19-EPs, our data represent a step toward the identification of a Co-19-EP-specific pattern of secreted signals released into circulation in COVID-19 patients.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by Comitato Etico di Area Vasta Emilia Centro della Regione Emilia-Romagna (CE-AVEC) (377/2020/Oss/AOUBo). The patients/participants provided their written informed consent to participate in this study.
Effects of Psychological Benefits of Greenness on Airlines’ Customer Experiential Satisfaction, Service Fairness, Alternative Attractiveness, and Switching Intention
In the context of climate change, this study uncovers the role of green airlines’ social responsibility in relation to consumers’ switching behavior. The effects of latent variables, including green psychology, airline corporate image, green experiential satisfaction, green service fairness, green alternative attractiveness and switching intention, were examined. In a highly competitive service environment, an organization needs to understand how passengers perceive its corporate image, satisfaction, fairness, attractiveness, and switching-intention behavior. The predicted relationships were tested with partial least squares structural equation modeling on a convenience sample of 615 valid datasets collected from individuals who used green airline services in China. The findings show that among the psychological benefits of greenness, only warm glow is a main driver of airline corporate image. Furthermore, airline corporate image, green service fairness, and green alternative attractiveness support passengers’ green experiential satisfaction. The evidence demonstrates that green experiential satisfaction and green alternative attractiveness have significantly positive effects on switching intention. However, green service fairness has no significant effect on green switching intention. This study contributes to the literature by improving understanding of airline customers’ perception of the complex relationships among the green constructs. This finding can help marketers facilitate and develop their external communication and craft their image to retain their existing or potential customers.
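The abstract's PLS-SEM analysis cannot be reproduced without the survey data, but the core idea, forming composite scores from indicator items and estimating a standardized path coefficient between constructs, can be sketched. Everything below is synthetic and simplified: the construct names are placeholders, the composites use equal weights rather than PLS's iteratively estimated outer weights, and only a single path is shown, not the study's full structural model.

```python
# Hedged, simplified stand-in for one PLS-SEM path: "corporate image"
# (3 synthetic items) -> "experiential satisfaction" (3 synthetic items).
import numpy as np

rng = np.random.default_rng(1)
n = 615  # sample size reported in the abstract

# Simulate indicator items that share a common latent factor
latent = rng.normal(size=n)
image_items = latent[:, None] * 0.8 + rng.normal(scale=0.6, size=(n, 3))
satis_items = latent[:, None] * 0.7 + rng.normal(scale=0.7, size=(n, 3))

def composite(items):
    """Equal-weight composite score, standardized (a crude stand-in
    for PLS's iteratively estimated outer weights)."""
    score = items.mean(axis=1)
    return (score - score.mean()) / score.std()

x, y = composite(image_items), composite(satis_items)

# With standardized scores, the path coefficient equals the correlation
beta = float(np.mean(x * y))
print(f"standardized path coefficient: {beta:.2f}")
```

A full analysis would instead use dedicated PLS-SEM software (e.g. SmartPLS or an equivalent package), which also bootstraps significance levels for each path.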
INTRODUCTION
Over the last two decades, global warming and environmental pollution have created high awareness in the hospitality industry. Attention to environmental protection continues to rise (Trenberth et al., 2014; Singh and Sharma, 2017). The hospitality sector plays a role in social responsibility and commits to protecting the environment as one of its marketing strategies. The aviation industry is a basic and leading industry of economic development and an important carrier for promoting the development of the tourism economy. Hospitality products are wrapped in a green image to provide psychological benefits, with the hope of providing customers with satisfactory values and meeting their needs.
As more environmental regulations have been implemented and individual environmental awareness has increased, an increasing number of travelers have been searching for and buying green tourism products (Han et al., 2011a; Han and Hwang, 2017). Many customers are willing to buy more environmentally friendly products or services provided by the hospitality sector (Liu et al., 2017). Green airline products have gained popularity in the tourism market, and green airlines' sustainable practices may affect their competitiveness in the market (Khare, 2014). Rajiani and Kot (2018) stated that new practices familiarizing green air travel deliver comfort for career-oriented clusters. These clusters are presented as active, calculating, and rational consumers who carefully allocate scarce resources until their cost and benefit analysis is confirmed satisfactory.
Airlines need to create stable performance. They are not limited to seeking new opportunities; they also need to create highly innovative products and services (Kong and Ibrahim, 2018). Previous scholars have identified that organizations with the capacity to innovate can respond to environmental challenges faster and better than non-innovative organizations (Miles and Snow, 1978; Cingöz and Akdogan, 2013). When an organization achieves service innovation, it may enhance its service processes and increase its competitiveness within the industry. As a result, such organizations provide effective service quality and benefits to their customers, building strong relationships and customer retention so that customers avoid switching to competitors (Kong and Ibrahim, 2018). Many airlines provide innovative services by adopting a green image, and the psychological benefits from green products signal trust and reliability to potential buyers. Product satisfaction may lead to buying decisions (Tran, 2020). Although these issues seem important, research on the psychological benefits of greenness that affect airline travelers' switching behavior is limited. The extent to which customer experiential satisfaction, service fairness, and green alternative attractiveness may lead to switching intention is also under discussion. Previous scholars have highlighted that green images are associated with experiential satisfaction. They have demonstrated that green experiential satisfaction plays a key role in influencing satisfaction (Wu et al., 2016; Wu and Cheng, 2019) and identified that green experiential satisfaction may influence green switching behavior for tourism products. Nikhashemi et al. (2017) identified that switching intentions are predictors of switching behavior, indicating that switching intentions positively influence satisfaction and switching behavior.
However, limited studies have paid attention to green service fairness and green alternative attractiveness, which may potentially affect green experiential satisfaction and switching intention. Current academic research on carbon emission reduction measures in airlines has mainly focused on technology (Lou et al., 2015) and policy (Hwang and Choi, 2018). Other studies have focused only on the green perception of the choice of airlines (Hagmann et al., 2015). Carr (2007) argued that in relational service contexts, customers' perception of service fairness is vital for satisfaction. Service fairness may lead to satisfaction in service encounters (Olsen and Johnson, 2003; Zhu and Chen, 2012). Nysveen et al. (2018) highlighted that service fairness has a relationship with switching behavior. However, in the airline sector, the importance of green alternative attractiveness, which may potentially affect green experiential satisfaction and switching intention, has not been examined. In the context of the green airline business, knowledge of the relationship between green airline switching intention and the psychological benefits of greenness, experiential satisfaction, green service fairness, and green alternative attractiveness is limited. Vuong et al. (2022) highlighted that knowledge management from the social sciences plays a pivotal role in positively changing human behavior and suggested evidence-based policymaking in communication. It can provide more insights into perceptions and positive buying decisions (Chan et al., 2021).
Airline practitioners should understand the consequences of switching behavior in green airline services to provide all the benefits and service satisfaction. Therefore, this study aims to fill the aforementioned research gaps and is expected to achieve the following objectives:
• To examine the relationship between the psychological benefits of greenness and airline corporate image.
• To explore the relationship of airline corporate image to green experiential satisfaction and switching behavior.
• To investigate the constructs of green experiential satisfaction, green service fairness, green alternative attractiveness, and airline switching intention as perceived by airline customers.
The study contributes by extending green theory from theoretical and practical perspectives to the airline sector. From the theoretical approach, the study proposes a unique construct in which psychological benefits affect airlines' green image, and airlines' green image affects green experiential satisfaction and airline switching behavior. The dimensions of green experiential satisfaction, green service fairness, green alternative attractiveness, and airline switching behavior are also examined within a relevant marketing model. The study's unique contribution is to gain an understanding of airline customers' perception of the complex relationships among green constructs. From the practical perspective, industry practitioners may understand how customers perceive airlines' green benefits, corporate image, green experiential satisfaction, service fairness, alternative attractiveness, and switching intention. The study can help airlines to stimulate green switching behavior through the five determinants mentioned (i.e., psychological benefits of greenness, green corporate image, green experiential satisfaction, green service fairness, and green alternative attractiveness). This finding can help marketers facilitate and develop their external communication and craft their image to retain their existing or potential customers.
Green Airline Marketing
Green airline marketing has emerged with the growing environmental awareness across all levels of society and with the rise of the green consumer segment (Kumar, 2016). The role of market communication concerning the eco-positioning of brands or corporations is important in the sustainable marketing mix (Khoo and Teoh, 2014; Migdadi, 2020). An organization can influence its green brand positioning by actively communicating the environmental attributes of its brand in comparison with competitors' brands (Hartmann et al., 2005). Khoo and Teoh (2014) noted that most of the aircraft emission level is affected by the aircraft load factor, fuel efficiency, cabin density configuration, aircraft size, and service frequency. Based on the effectiveness of airlines' green operation strategies, Migdadi (2020) categorized airlines into low, low-to-moderate, and high emphasizer patterns according to fuel-saving actions (i.e., flight route management and flight weight management), energy-saving actions (i.e., upgrading and replacing facilities, vehicle and energy design, and transportation management), waste management and recycling actions (i.e., recycling, upcycling, and reusing waste), and water management actions.
In addition, utilizing advertising and the whole communication mix to address environmental credibility and concern can also be regarded as a useful tool in creating positive eco-positioning among air travelers. Peattie (2010) stated that individual psychographic concern for environmental protection translates into changed consumption behavior, which is relatively consistent across different consumption spheres. However, as consumers often consider the premium price for an environmentally superior product, marketing green products and services requires different strategies than traditional ones (Dangelico and Vocalelli, 2017). Although green products are crucial to the environment, green airlines' psychological benefits to customers are continuously being ignored. The psychological benefits of green content create an advantage for consumers. On the other hand, this content can let customers participate in protecting the environment (Trang et al., 2019). Therefore, further examining and better understanding how customers perceive green benefits is worthwhile for organizations to meet the satisfaction of targeted green customers.
Airline Corporate Image
The overall brand image indicates the global and general beliefs and perceptions that patrons develop based on diverse sources from the acquired and processed information on a particular brand (Assael, 1984;Han et al., 2019).
Green brand image has become a popular topic in society. To obtain this image, a brand should be able to differentiate itself from other brands through consistent and dedicated activities designed for green actions (Lin and Zhou, 2020). An organization can pursue five desired benefits when developing green marketing, namely complying with environmental pressures, raising corporate competitiveness, enhancing the corporate image of a company, seeking new market opportunities, and enhancing product value (Chen, 2010; Singh and Pandey, 2012). Consumers perceive a green brand image as a correlation between a brand and its environmental commitments and concerns (Chen, 2010). Hinnen et al. (2017) stated that consumers' willingness to pay a premium price for greenness contributes to corporations in air travel when cost-effectiveness exists compared with purchasing a regular product. A previous study also indicated that the green market segment should focus on strong competitive advantages for products and services, including quality and prices (Borin et al., 2013). Therefore, understanding customers' perception of green airlines may provide an advantage for airline practitioners in promoting their services.
Green Service Innovation and Customer Satisfaction
Without service innovation, competition in the airline industry poses a serious threat. Growing customer acquisition costs and increased customer expectations require airlines to create value by adopting innovative services in response to increasing competitive pressure, and to develop service innovation to ensure customer satisfaction and retention (Kong and Ibrahim, 2018). Service innovation is defined as the process of developing and releasing a new or important product or service to meet customer needs and wants (Al-Otaibi and Al-Zahrani, 2009). The green concept adds value to the environment, enlarges the green tourism product range, and establishes a new management system (Zainal Abidin et al., 2011). Airlines that adopted a green management system changed their business practices and increased external communication (Rajapathirana and Hui, 2018). The change in service innovation may affect the technical and social systems of an organization. The change might affect their performance and help customers better understand the types of capabilities that can result in competitive advantages.
The rationale of service itself rests on the notions of "product" and "process" for services and manufacturing (Tether et al., 2002). Designing a service innovation for a tourism organization is a multidisciplinary process of designing, realizing, and marketing combinations of new and existing services and products. The major task is to create value for the customer experience while providing benefits to the service organization.
Very often, customers evaluate the service product provided by the tourism organization after they purchase it (Choi and Kim, 2013). Davis and Heineke (1998) highlighted the conceptual idea of the "confirmation/disconfirmation" theory of customer satisfaction. The customers' level of satisfaction and their perception of the tourism organization's performance may bear on their future buying decisions. Large gaps between perception and actual performance may lead to dissatisfaction, non-repetition of the service, or even switching behavior (Palawatta, 2015). According to Hilal (2015), introducing new innovative services may improve service productivity, and organizations need to ensure that innovative products and services are appropriately priced to attract and satisfy customers. Other studies found customer satisfaction in relation to service innovation and customer value (Weng et al., 2012). Therefore, green airlines' psychological benefits in innovative products cannot be ignored. Bhattacharya and Sen (2004) categorized green marketing theories and grouped consumer-level theories into six categories, namely value and knowledge, beliefs, attitudes, intentions, motivations, and social confirmation. Among the existing models, the theory of reasoned action (TRA) and the related theory of planned behavior (TPB) are most commonly applied to green consumerism (Peattie, 2010). TPB is an extension of TRA. The fundamental tenet of the TPB is that individuals tend to make reasoned choices and choose alternatives that have the highest benefits and the least costs or negative effects to themselves (Li and Wu, 2019). Ajzen (1991) stated that behaviors are shaped by intentions, which, in turn, are driven by consumers' attitudes, subjective norms, and perceived behavioral control.
Chen (2016) also stated that TPB is successful in forecasting and interpreting individuals' intentions and behavior across a wide range of environmental causes. However, some scholars have argued that TRA and TPB still have limitations in several aspects. The contrast between strong intentions and limited action highlights a weakness of traditional TPB, which may ignore other essential factors, such as unconscious motivation, spontaneous choices, and external temptation. Therefore, researchers have attempted to modify the model through extended variables or models to enhance its explanatory strength (Chen, 2016; Li and Wu, 2019).
Modeling Green Customer Behavior
In addition, based on the green marketing categorization of consumers' intentions, the rational choice theory, consumer choice theory, and acquisition-transaction utility theory explicitly focus on economic intentions (Bhattacharya and Sen, 2004). Intentions represent predominant individual desires and are initially formed in thought before they can be achieved. Notably, however, green airline marketing mainly investigates positive economic intentions, such as revisit, purchase, and repurchase intentions after satisfying consumers' needs. However, some scholars have argued that the judgment of consumer satisfaction should be divided into positive and negative emotions; moreover, these are not only opposite in concept but also two extremes in an independent space. Martins et al. (2013) showed that satisfaction has an inverse influence on switching intention, indicating that satisfied consumers are less likely to switch than unhappy ones. Satisfaction is considered a function of perceived performance relative to consumers' prior expectations (Chiesa et al., 2020).
Psychological Benefits for Green Users
Many scholars have identified that using green services provides green benefits and a more environmentally friendly approach (Wu et al., 2016; Xie et al., 2019). Previous scholars have defined the concept of psychological benefits from using environmental services as indicating spiritual benefit and comfort for customers using airline services (Gwinner et al., 1998; Hwang et al., 2019). Hartmann and Apaolaza-Ibáñez (2009) highlighted three psychological benefits: warm glow, self-expressive benefits, and nature experience. When people perceive themselves as concerned for the environment, they may experience a warm glow and take social responsibility. Spielmann (2020) recently posited that warm glow is customers' belief that they will be rewarded for their environmentally friendly behavior, taken as intrinsic satisfaction.
Self-expressive benefit can be defined as the customers' benefit of signaling concern about environmental problems. Customers hope to express themselves as protecting the environment and are more likely to want to travel with a green airline, which gives them a high level of satisfaction and self-expressive benefit (Hartmann and Ibanez, 2006; Hu, 2012).
Nature experience is a vital element of psychological benefits. Understanding the nature experience and the enhancement of well-being perception is important. Thus, when people spend time in a natural setting, they can recover from stress during their short stay in the natural environment (Hwang et al., 2019). Customers with a high awareness of nature experience will more likely prefer green services or choose green airlines (Hartmann and Apaolaza-Ibáñez, 2012). Therefore, the above three elements support the psychological benefits to green users. Given the large number of benefits, many customers may not want to switch from their environmentally friendly products. To maintain competitiveness in the service environment, airlines need to utilize green branding and decrease customer switching behavior for environmental concerns (Chen, 2010) and retain the loyalty of green customers.
Green Switching Intention and Satisfaction
In the service context, previous scholars have identified trust and satisfaction as the constituents of relationship quality (Farooqi, 2014): trust is directed more toward the eco-friendly organization, whereas satisfaction concerns the suppliers. For example, hospitality customers evaluate an environmentally friendly context based on their experience and satisfaction. Jung and Yoon (2012) proposed that variation needs moderate the effect of satisfaction on switching intentions. Conversely, Setiyaningrum (2006) claimed that variation needs do not moderate the effect of satisfaction on switching intentions when applied to different services.
Customer satisfaction is crucial in marketing theory. A service organization needs to cater to customers' needs and desires, and customers make judgments about service features and their attributes (Back and Parks, 2003). If performance exceeds customers' expectations, they will be satisfied; otherwise, they will be displeased with the services or switch to another service provider. Han et al. (2011b) noted that a customer-perceived lack of attractive alternatives to a service organization is an important constraint on switching intention. Service switching harms customer loyalty, retention, and repurchase intentions. Customer loyalty reflects a customer's assessment of customer value and of the company's resources and skills. Organizations can deliver high service quality and motivate consumers to strengthen their relationship with their service provider (Hess et al., 2003; Bell et al., 2005). Anton et al. (2005, p. 139) proposed that an organization's green commitment can be understood as the desire to develop and maintain long-term exchange relationships, a desire that materializes in the fulfillment of implicit and explicit promises, in the sacrifices made, and in the economic and social well-being of the parties with an interest in the relationship. An airline's green commitments can attract customers' interest and support frequent communication and information sharing. Consumers who obtain more information develop loyalty and, as a result, do not switch. Some cases indicate that organizations that retain customer loyalty keep customers from contemplating competitors (Wathne et al., 2001).
Furthermore, high product quality can motivate customer loyalty, but perceived price unfairness can lead to switching intentions. According to Keaveney (1995), customers switch because they are dissatisfied with the price they paid: they may feel the price is unfair, or they may have other, fairer-priced options. Therefore, price-related issues are among the drivers of switching behavior.
Therefore, the present study seeks a deeper understanding of psychological green benefits as linked to an airline's corporate image, green experiential satisfaction, green service fairness, green alternative attractiveness, and switching intention.
Research Model and Hypothesis Development
Based on the above discussion, this study proposes a conceptual framework (Figure 1). We use a multidimensional model in which an airline's psychological benefits of greenness relate to airline corporate image, green experiential satisfaction, green service fairness, green alternative attractiveness, and airline switching intention.
Psychological Benefit of Greenness
Psychological benefit is regarded as the post-purchase behavior, which is defined as an individual's spiritual comfort generated after buying a brand's products and services (Hwang et al., 2019). Vuong (2021) highlighted that environmental value will reshape human behavior in the business sector. The previous studies have provided three dimensions to measure the psychological benefits for green brands, namely warm glow, self-experience, and nature experience (Hwang and Choi, 2017;Lin et al., 2017b;Hwang et al., 2019;Liao et al., 2019).
Warm Glow
The warm glow of giving posits that impure altruism can motivate individuals to contribute to the public good through pro-environmental behavior, which is supported by the pro-social behavior theory (Aaker, 1999;Hwang et al., 2019). This concept has received increasing interest in the green brand domain (Lin et al., 2017b). Hwang and Choi (2017) confirmed that warm glow has a positive influence on the overall brand image. Similar research has also indicated the importance of warm glow factors, which affect consumers' psychological attitudes positively toward the use of brands (Hartmann and Apaolaza-Ibáñez, 2012;Liao et al., 2019;Boobalan et al., 2021). Based on the above discussion, we propose the following: H1: Warm glow has a positive influence on the green airline corporate image.
Self-Expressive Benefit
The self-expressive benefit concept is based on signaling theory, which states that individuals discover psychological benefits through self-expressiveness. Correspondingly, they tend to express their preferred information to others indirectly (Ahmad and Thyagaraj, 2015; Boobalan et al., 2021).
A brand's benefits refer to consumers' perceptions of what they can attain from the product's attributes (Lin et al., 2017a). Functional brand benefits are usually correlated with consumers' functional needs and readily foster a positive brand attitude (Lin et al., 2017a). For instance, customers are comforted by jointly protecting the environment for sustainable development and conveying positive information. In this case, individuals are more likely to perceive a high level of self-expressive benefit and hold a positive attitude toward high-signaling products or services labeled "green," "eco," or "sustainable" (Lin et al., 2017a; Hwang et al., 2019). Aaker (1999) noted that because individuals act differently in varying situations, empirical support for self-expressive research should be context-specific. Studies in different areas have reported conflicting findings. For instance, Hartmann and Apaolaza-Ibáñez (2012) found no link between self-expressive benefits and general brand attitude for brands linked to the supply of electricity, whereas positive linkages have been reported for emotional nature experience (Hartmann and Apaolaza-Ibáñez, 2008), charity (Andreoni, 1989), and energy-saving appliances (Liao et al., 2019). This study examines self-expressive benefits and green brand image in a green airline scenario.

H2: Self-expressive benefits have a positive influence on green airline corporate image.

Nature Experience

Hwang et al. (2019) posited that nature experience is the most important psychological benefit in an eco-friendly context. Nature experience has been investigated in diverse fields, covering the dimensions of experience and tourists' purchase intentions (Jamrozy and Lawonk, 2017) as well as experience, value for money, and satisfaction (Gallarza and Saura, 2006; Williams and Soutar, 2009; Wu et al., 2014; Sharma and Nayak, 2019).

Moreover, satisfaction with a brand image in the eco-airline domain has received little attention in previous studies. However, individuals who perceive a high level of nature experience tend to think positively of the corporate brand image (Hwang and Choi, 2017). Thus, this study proposes the following hypothesis:

H3: Nature experience has a positive influence on the green airline corporate image.

Airline Corporate Image and Green Experiential Satisfaction

Green experiential satisfaction has been described as a novel concept that evaluates consumers' overall satisfaction based on their experience of a place. Nysveen et al. (2018) found strong empirical support for the relevance of perceived green image and experience (tested through sensory, affective, cognitive, relational, and behavioral domains) in the hotel sector. Gelderman et al. (2021) found that green corporate image has a significant effect on consumer satisfaction in the business-to-business context. Furthermore, green product quality, green product price, and salespersons' green expertise have shown positive effects on green experiential satisfaction. Han et al. (2018) confirmed that experiential satisfaction with airport duty-free shopping has a positive influence on consumer loyalty; that satisfaction is also significantly associated with purchase desire in duty-free shops.

Previous studies have found that green brand image produces valuable outcomes, such as green trust, satisfaction, brand equity, word-of-mouth intention, and green competitive advantage (Lin and Zhou, 2020). Lin and Zhou (2020) noted that utilitarian environmental benefit and green brand innovations have a direct effect on green brand image. Psychological benefits (e.g., warm glow, self-expressive benefits, and nature experience) underpin the perceptual effects in green brand positioning (Hwang and Choi, 2017; Lin et al., 2017a; Hwang et al., 2019). Generally, the cognitive and affective brand attributes of functional benefits affect consumers' judgment of the overall image. Thus, green brand image can affect consumer satisfaction and loyalty (Hartmann and Apaolaza-Ibáñez, 2008; Lin et al., 2017a).

H4: Green airline corporate image has a positive influence on green experiential satisfaction.

Green Airline Experiential Satisfaction and Switching Intention

Park et al. (2015) mentioned that experiential satisfaction can be measured through consumers' favorable and unfavorable factors. Favorable factors, such as word-of-mouth communication, purchase intention, and price sensitivity, have been examined in various academic fields by many researchers (Chang and Fong, 2010; Park et al., 2015). In comparison, unfavorable factors, such as complaints and switching intentions, have been investigated by few researchers.

Switching intention is defined as the possibility of consumers transferring their existing transactions with an organization to a competitor (Liang et al., 2018). Previous research has found that satisfaction directly influences switching intention in tourism destinations. Tran (2020) highlighted that consumers' perceived risk toward an organization reduces their perceived satisfaction, and a decrease in perceived risk may increase purchase intention; perceived satisfaction is therefore among the most crucial factors leading to purchase intention. Liang et al. (2018) examined satisfaction in the Airbnb context and confirmed that transaction- and experience-based satisfaction directly influence switching intention. Based on these studies, if customers are satisfied with their experience, their intention to switch will be lower. Therefore, the present study proposes the following hypothesis:

H5: Green experiential satisfaction has a negative influence on airline switching intention.
Green Service Fairness, Green Experiential Satisfaction, and Switching Intention
The concepts of service fairness and equity, which originate from equity theory and are widely used, can be treated synonymously (Cappelli and Sherer, 1988). Consumers judge whether the products and services they receive are commensurate with their satisfaction and with what they deserve. Drawing on previous flight experiences or alternative modes of transportation, consumers form service fairness as a comprehensive judgment against their own fairness standards (Seiders and Berry, 1998). Sarker et al. (2019) reviewed the previous literature on consumer-based brand equity (CBBE) and conceptualized a consumer-based service brand equity (CBSBE) model suited to the airline industry. They argued that capturing service brand equity from the consumer flight experience is a complex undertaking: it requires practitioners to attend to consumers' understanding, feelings, and brand stimuli, such as in-flight services, employee interactions, and price, which the CBBE model does not cover. Therefore, they composed a more comprehensive model to examine airline service equity, comprising stimuli (e.g., airline service, direct service experiences, and brand consistency) and organism (brand awareness, brand meaning, and perceived value).
Recent studies have found that service fairness plays an essential role in satisfaction and switching intention. Wu et al. (2016) summarized the previous literature and concluded that switching intention, experiential satisfaction, and equity have been investigated in the restaurant, hotel, golf, chain-restaurant, tourism, and tourist-destination fields. Setiawan et al. (2020) stated that price fairness reflects consumers' expectation that the price suits the service quality received and is at least as fair as prices offered by other airlines. Jiang and Zografos (2021) regarded green service fairness as an important criterion for allocating scarce resources among self-interested practitioners. Many consumers evaluate green products to determine whether the true value of the green offerings justifies the inputs required to acquire the service. Therefore, investigating fairness perception provides a new theoretical direction for confirming consumers' behavioral responses to green service offerings and their satisfaction in purchasing green products. To the best of our knowledge, this linkage has not been investigated in the sustainable airline field. Therefore, we propose the following:

H6: Green service fairness has a positive influence on green experiential satisfaction.
H7: Green service fairness has a negative influence on airline switching intention.
Green Alternative Attractiveness, Experiential Satisfaction, and Green Airline Switching Intention
When consumers weigh alternatives on price, value, service, quality, or other essential elements and conclude that other airlines perform better, green alternative attractiveness is formed (Han, 2015). However, when superior competition is lacking, consumers may have little choice but to stay (Han, 2015). In other words, when knowledgeable consumers in this area emphasize greenness (a higher demand rate) and green airlines operate in an oligopolistic market with few viable alternatives, switching intention may decline.
Ortegon-Cortazar (2019) offered a list of alternative attractiveness factors based on a multidimensional analysis of eco-natural resources in malls, including, but not limited to, access to the malls, the variety of offerings, clients, the physical design of the malls, a luxurious feeling, and the eco-natural environment. Customers are regularly attracted to strong alternatives, particularly when they perceive the relative merit of a competing alternative's price, value, location, service, or quality. According to Sharma and Peterson (2000), customers are likely to terminate an existing relationship with a service provider and move to a new provider when they perceive the alternative as more attractive. Thus, customers can be expected to switch to other service providers in exchange for better service, price, and image (Kim et al., 2004). Nagengast et al. (2014) confirmed that alternative attractiveness moderates the relationship between satisfaction and repurchase intention. Indeed, the pull of a potentially more satisfying alternative may be weakened by raising switching costs; increasing individuals' perceived switching costs (e.g., by reducing alternatives' attractiveness) is thus likely to weaken the satisfaction-repurchase intention link. Han (2015) identified that the relationships between guests' pro-environmental intentions for green hotels and their direct predictors are influenced by the perceived attractiveness of non-green alternatives; the results confirmed that customers perceive non-green alternatives as less attractive than green lodging products. Using the extended TPB model to investigate the moderating effect of non-green alternative attractiveness, Han (2015) found that the alternative is less attractive when consumers consider their attitude, perceived behavioral control, and moral obligation.
Previous research on green convention attendees has noted that green alternative attractiveness has a significant effect on green switching intention. However, the relationship between green alternative attractiveness and green experiential satisfaction has not been empirically supported, and that evidence cannot be assumed to represent green airlines. Therefore, we propose the following:

H8: Green alternative attractiveness has a positive influence on green experiential satisfaction.
H9: Green alternative attractiveness has a positive influence on airline switching intention.
RESEARCH METHODOLOGY
The study systematically reviewed the previous literature to determine items suited to the research problems and objects. The participants were informed clearly about the research objectives and expected outcomes. If a participant did not complete the survey, the data were not used; Kovaova and Lewis (2021) recommended removing any survey completed less than 50% to ensure quality. Quantitative research was used as the appropriate method for analyzing the conceptual model and examining our hypotheses, and the rationality of the hypotheses was verified through data collection and analysis. The questionnaire items were measured on a seven-point Likert scale, with "1" indicating "strongly disagree" and "7" indicating "strongly agree." Each construct was measured using multiple measurement items (Churchill, 1979); previous work has verified that representing each construct with three or more items yields more reliable results (Baker and Crompton, 2000). In this study, the measurement of the psychological benefit of greenness (with warm glow, self-expressive benefit, and nature experience dimensions) was based on Hwang and Choi (2017) and Lin et al. (2017a). The seven questions assessing green corporate image were derived from Hwang and Choi (2017). To avoid difficulties caused by improper design, a preliminary test of the questionnaire was conducted before the formal survey. Through this preliminary modification, the accuracy of the study was improved, and the questionnaire could be distributed to the target population; we could also assess the accuracy and inertia of the possible responses. As this study targets people who have flown with green airlines and have basic knowledge of green aviation, the survey scope is wide.
In the preliminary test, the researchers sent questionnaires to industry practitioners with green knowledge via an online platform. They were selected as pre-test respondents because industry practitioners have a basic concept of what green airlines are and can provide advice for improving the questionnaire, minimizing bias and enhancing the effectiveness of the instrument. The original questions were in English, so a bilingual expert was invited to check the translated questionnaire to ensure its validity, and back translation was adopted to increase credibility.
The study was initially designed with a 37-item questionnaire. The ratio of items to pre-testers was approximately 1:5, which is suitable for ensuring the recovery rate (Wu et al., 2014). According to this ratio, 200 copies were collected, of which 185 were valid questionnaires, an effective recovery rate of 92.5%, ensuring research quality (Kovaova and Lewis, 2021).
Factor analysis is needed before finalizing the questionnaire, as it helps determine whether the dimensions can be empirically verified; exploratory factor analysis is generally used for this purpose. Through the exploratory factor analysis of the 37 items, four items were eliminated (i.e., "Overall, I am happy with this eco-friendly airline because it is environmentally friendly;" "I feel like a superior consumer when I choose an eco-friendly airline;" "With an eco-friendly airline, people around me can observe that I am aware of ecological development;" and "I have already changed eco-airlines several times.").
As a result, 33 items were retained. Convenience sampling with an online survey was used to reach the target participants. The received data were analyzed with partial least squares structural equation modeling (PLS-SEM) in SmartPLS 3.3.3 to conduct confirmatory factor analysis (CFA) and hypothesis testing.
RESULTS AND DISCUSSION
The participants were Chinese residents over 18 years of age who had chosen green flights. To ensure the reliability and accuracy of the results, the participants went through a rigorous verification process with two filtering questions confirming that they could answer the questionnaire (Kovaova and Lewis, 2021). The online questionnaire was uploaded to the Wenjuanxing platform and sent to participants via a WeChat group (one of the popular social media platforms in China). The two filtering questions asked whether respondents knew about green flights and had traveled on green flights before; only those answering yes to both could continue, otherwise the survey ended. The parameters of the study were clearly described before respondents started answering, and a pilot test ensured the clarity of the survey. The participants were asked to fill out a questionnaire listing questions on the green airline experience based on passengers' psychological behavior, brand, experiential satisfaction, airline alternatives, service fairness, and airline switching intention.
Data collection was completed within 3 months, from April to July 2021. Given that many cities were in lockdown due to the Corona Virus Disease (COVID-19) pandemic, we adopted an online questionnaire to ensure a safe environment and minimize the spread. We provided the questionnaire link and sent it to the respondents, and we adopted convenience sampling for data collection.
A total of 684 questionnaires were received in this study, of which 615 were valid after incomplete responses were removed, a response rate of 89.91%. Li (2016) recommended at least 100 samples for data analysis, with better results when the sample size exceeds 200. When determining sample size from the observed variables, the ratio of observed variables to sample size should be between 1:10 and 1:15. A total of 33 variables were observed in this study, so the acceptable sample size should be above 330; the sample collected exceeds this minimum.
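The sample-size and response-rate checks described above are simple arithmetic; the following minimal Python sketch (function names are ours, thresholds and figures taken from the text) reproduces them:

```python
def minimum_sample_size(n_observed_vars: int, ratio: int = 10) -> int:
    """Minimum sample size under the 1:10 observed-variable-to-sample rule."""
    return n_observed_vars * ratio

def response_rate(valid: int, received: int) -> float:
    """Share of received questionnaires that were valid, as a percentage."""
    return round(valid / received * 100, 2)

# Values reported in the study: 33 observed variables, 684 received, 615 valid.
min_n = minimum_sample_size(33)   # 33 * 10 = 330
rate = response_rate(615, 684)    # 89.91
print(min_n, rate, 615 >= min_n)
```

Under the stricter 1:15 ratio the minimum would be 495, which the sample of 615 also satisfies.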
In the data statistics of the respondents, the males accounted for 46.18%, and females accounted for 53.82%. Most of the respondents were aged between 18 and 29, accounting for 54.31%, with a diploma or bachelor's degree (74.8%). In addition, the interviewees were mainly students (39.51%), and took green flights one to two times, accounting for 57.72%. Table 1 shows the details of the demographics of the respondents.
Confirmatory Factor Analysis
In this study, to confirm the reliability and validity of the scales, we tested the measurement model fit by conducting CFA using PLS-SEM (SmartPLS 3.3.3), which was also used to test discriminant validity. Table 2 shows the CFA results. Cronbach's alpha was used as the standard for internal consistency reliability for each construct (Choi and Sirakaya, 2006). George and Mallery (2003) indicated that, by the rule for alpha reliability, an alpha greater than 0.7 shows that the items in the scale are reliable.
According to the SmartPLS 3.3.3 output, the Cronbach's alpha values of the eight constructs ranged from 0.930 to 0.966, all exceeding 0.7, indicating that the investigated constructs had internal consistency reliability. Composite reliability (CR) concerns the internal consistency of composite factors involving multiple items (Fornell and Larcker, 1981a). CR was used for the reliability test, and CR values should be higher than 0.70 (Hair et al., 2010). In this study, the CR values of the eight constructs ranged from 0.955 to 0.972, so the reliability test was passed.
Factor loadings (FL > 0.7) and the average variance extracted (AVE > 0.50) for all items evaluated convergent validity (Fornell and Larcker, 1981a). The FL test assessed the measurement validity of each item. Table 2 shows that the FLs of the 33 items ranged from 0.898 to 0.957, exceeding the threshold. The AVE values of the eight constructs ranged from 0.831 to 0.889, all exceeding the 0.50 threshold for convergent validity (Fornell and Larcker, 1981a).
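The CR and AVE figures reported above follow standard formulas over standardized loadings; a minimal illustrative sketch (the loadings here are hypothetical, chosen within the reported 0.898-0.957 range):

```python
def composite_reliability(loadings):
    """Composite reliability: (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for a three-item construct.
lam = [0.90, 0.92, 0.95]
print(round(composite_reliability(lam), 3))  # 0.946 > 0.70 threshold
print(round(ave(lam), 3))                    # 0.853 > 0.50 threshold
```

High loadings like those reported in Table 2 mechanically produce CR and AVE values well above the conventional cutoffs, as this example shows.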
In addition, discriminant validity was tested by comparing the square root of each construct's AVE against its correlations with the other constructs (Fornell and Larcker, 1981b): each latent variable should share more variance with its own indicators than with any other construct. Table 3 presents the inter-construct correlation matrix. All correlations satisfy this criterion, implying that the hypothesized measurement model is reliable and valid in its structural relations.
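The Fornell-Larcker comparison just described can be sketched as a simple check (AVE values and the correlation below are hypothetical, chosen within the ranges reported in Tables 2 and 3):

```python
import math

def fornell_larcker_ok(ave_by_construct, correlations):
    """Discriminant validity holds when the square root of each construct's AVE
    exceeds its correlations with every other construct (Fornell-Larcker)."""
    for name, ave_value in ave_by_construct.items():
        root = math.sqrt(ave_value)
        for (a, b), r in correlations.items():
            if name in (a, b) and root <= abs(r):
                return False
    return True

# Hypothetical values: AVEs within the reported 0.831-0.889 range.
aves = {"warm_glow": 0.85, "corporate_image": 0.88}
corrs = {("warm_glow", "corporate_image"): 0.54}
print(fornell_larcker_ok(aves, corrs))  # True: sqrt(0.85) ~ 0.922 > 0.54
```

With AVEs above 0.83, the square roots all exceed 0.91, so the criterion fails only if an inter-construct correlation rises above that level.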
Hypothesis Testing
In this study, PLS-SEM in SmartPLS 3.3.3 was used to build a structural equation model and test the research hypotheses, with bootstrapping used to ensure the stability of the results. Table 4 shows the results of the hypothesis tests. Warm glow has a significantly positive effect on the airline's corporate image (β = 0.543, p < 0.01); therefore, Hypothesis 1 is supported. The relationships between self-expressive benefits and green corporate image (β = 0.085, p > 0.05) and between nature experience and green corporate image (β = 0.026, p > 0.05) are not supported; therefore, Hypotheses 2 and 3 are rejected. Hypothesis 4 is supported, as green corporate image has a significant and positive influence on green experiential satisfaction (β = 0.116, p < 0.05).
Green experiential satisfaction has a significant effect on green switching intention (β = 0.173, p < 0.01); therefore, Hypothesis 5 is supported. Green service fairness has a significant and positive effect on green experiential satisfaction (β = 0.168, p < 0.01) but shows no significant link to airline switching intention (p > 0.05); therefore, Hypothesis 6 is supported, but Hypothesis 7 is not. Green alternative attractiveness has a significantly positive effect on green experiential satisfaction (β = 0.161, p < 0.01) and on green switching intention (β = 0.245, p < 0.01), verifying Hypotheses 8 and 9.
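The bootstrapping step used above to judge path significance can be illustrated with a percentile bootstrap of a simple regression slope, a simplified stand-in for PLS path coefficients (the data here are synthetic, not the study's):

```python
import random

random.seed(1)

def bootstrap_ci(x, y, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for a simple OLS slope,
    mirroring in spirit how PLS-SEM bootstrapping assesses path coefficients."""
    def slope(xs, ys):
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        den = sum((a - mx) ** 2 for a in xs)
        return num / den
    pairs = list(zip(x, y))
    n = len(pairs)
    stats = sorted(
        slope(*zip(*[random.choice(pairs) for _ in range(n)]))
        for _ in range(n_boot)
    )
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Synthetic data with a genuine positive relationship (true slope 0.5).
x = [i / 10 for i in range(30)]
y = [0.5 * v + random.gauss(0, 0.2) for v in x]
lo, hi = bootstrap_ci(x, y)
print(lo > 0)  # a CI excluding zero suggests the path is significant
```

A bootstrap interval that excludes zero corresponds to rejecting the null of a zero path coefficient at the chosen alpha, which is how the p-values in Table 4 are interpreted.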
Effects of Green Psychological Benefits on Airline Corporate Image
The three psychological benefit determinants, namely warm glow, self-expressive benefit, and nature experience, are widely used to examine the customers' spiritual comfort of using eco-friendly products or services (Hwang et al., 2019). Starting with these determinants, this study examined their associated effects on the consumers' viewpoint of green airline image.
Based on the statistics, warm glow is the most significant driver of airline corporate image (Hypothesis 1), a result also supported by previous studies (Hwang and Choi, 2017). Lin et al. (2017b) likewise stated that warm-glow benefits can affect green consumers' perceived value, which reflects a strong connection between customers and their green brands. However, as noted earlier, self-expressive benefit and nature experience are not reflected in the corporate image (Hypotheses 2 and 3), which does not match earlier findings (Hwang and Choi, 2017; Lin et al., 2017a). The results imply that customers may perceive a positive image when they regard themselves as doing the right thing (contributing to the environment). Rather than displaying a positive characteristic, the segment that consumes green aviation is more likely to rely on self-accomplishment (e.g., social responsibility, green brand loyalty, and contribution to pollution abatement). By contrast, this segment pays less attention to social belonging or esteem needs when choosing green flights as transportation, such as congruence of self-image or enjoyment of the nature experience. Moreover, our target respondents were Chinese customers, who are culturally different from non-Chinese segments: their buying decisions reflect low individualism, and they are less concerned with self-expressive behavior than customers who belong to groups that look after each other in exchange for loyalty (Huang and Crotts, 2019).
Nature experience does not support airline corporate image, which agrees with Jamrozy and Lawonk (2017), who indicated that customer purchase intention depends on a value-for-money experience. In today's highly competitive environment, customers' concern with price, quality, and value can explain why the hypothesis linking nature experience to the organization's image is not supported.
Effect of Airline Corporate Image on Green Experiential Satisfaction and Switching Behavior
The data analysis reveals that airline corporate image positively influences green experiential satisfaction, and experiential satisfaction in turn reduces airline switching intention, concurring with previous propositions. To some extent, experiential satisfaction acts as a key element for retaining existing consumers. Furthermore, a well-developed corporate image (e.g., expertise in building an environmental reputation and reliable eco-friendly products and services) can reduce customers' intention to switch green brands.
Relationship Among Green Experiential Satisfaction, Green Service Fairness, Green Alternative Attractiveness, and Airline Switching Intention Perceived by Airline Customers

The third research objective is partially supported. The results indicate that green alternative attractiveness has a significant and positive influence on airline switching intention (Hypothesis 9). However, green service fairness does not affect airline switching intention directly (Hypothesis 7). Yieh et al. (2007) highlighted that when a customer perceives the price given by the service provider as fair, positive feelings toward the provider gradually shape the buying decision. Studies have found that price is a crucial factor in customer satisfaction and loyalty. Therefore, price fairness may determine customers' switching intention and loyalty decisions even when the green service itself is perceived as fair. As business becomes more competitive, customers' buying decisions rest on the corporate brand image; if the marketer merely projects a green promotional image, this may still lead to switching intention. Therefore, a cost-effective flight offering experiential satisfaction will ultimately prevail in a rival market.
CONCLUSION
This study aims to identify the effect of passengers' perceptions of green psychological benefits (through three dimensions: warm glow, self-expressive benefits, and nature experience), green service fairness, and green alternative attractiveness on their outcome variables in the green airline industry. More specifically, this study proposes that the psychological benefits of greenness can affect the airline's corporate image, which in turn affects passengers' green experiential satisfaction and can reduce airline switching intention. Meanwhile, this study proposes that green service fairness and green alternative attractiveness influence green experiential satisfaction and airline switching intention. Nine hypotheses were developed from the theoretical relationships among the proposed constructs. The data analysis yields theoretical and practical implications for stakeholders, as follows.
Theoretical Implications
First, the data analysis indicates that warm glow (Hypothesis 1) is the main driver of the airline's corporate image. The result supports previous studies in diverse industries (Hwang and Choi, 2017; Lin et al., 2017b). However, in contrast to previous studies, self-expressive benefit (Hypothesis 2) and nature experience (Hypothesis 3) do not support the airline corporate image. Although previous studies have confirmed this relationship in a similar field, the result of the current study differs from that evidence: utilizing self-expressive benefits and nature experiences in marketing campaigns to improve airline corporate image does not work in China (Lin et al., 2017a,b). As previously mentioned, Chinese culture reflects low individualism: self-expressive behavior is seldom a concern, and customers instead value belonging to groups that look after each other in exchange for loyalty. The result implies that not all psychological benefits shape passengers' perception of airline corporate image. In this scenario, the psychological benefit of choosing a green carrier does not stem from self-interested motivation; rather, the sense of self-achievement when flying with a green airline can improve the image of the airline organization. One possible explanation is that the market segment choosing green airlines in China consists of passengers who are concerned about green issues, so the main reason for flying with a green airline is self-achievement. Another possible reason is that, with the development of transportation networks and living standards, people are no longer required to verbally promise to support environmental protection when they can simply act to protect it. Thus, the social norm is barely perceptible, and nature experience is not a core benefit that passengers purposefully pursue.
Second, this study confirms that the airline corporate image supports passengers' green experiential satisfaction (Hypothesis 4), and that green experiential satisfaction can reduce airline switching intention (Hypothesis 5). The importance of green experiential satisfaction and of reducing switching intention has been consistently emphasized across various fields. However, applying the existing theoretical concept to a relatively unfamiliar field can improve its reliability and validity (Chen, 2010). Correspondingly, the practical implications in different areas can achieve a more reflective interrelation.
Third, green service fairness (Hypothesis 6) and green alternative attractiveness (Hypothesis 8) have a direct influence on green experiential satisfaction. Green service fairness and green alternative attractiveness can affect airline switching intention through green experiential satisfaction. Green alternative attractiveness can also influence switching intention directly (Hypothesis 9). However, green service fairness fails to affect airline switching intention (Hypothesis 7). To the best of our knowledge, correlations among green experiential satisfaction, green switching intention, green alternative attractiveness, and green service fairness are hardly investigated in the airline industry. The study coincides with the proposition of earlier work that investigated these correlations in the green convention context. The results may not be accidental. From these relationships, green service can generate passengers' satisfaction with a green flight experience. However, green service fairness does not directly affect passengers' switching intention. One possible reason is that green service fairness can strengthen the green flight experience, but it is not the core value on which passengers base the decision to switch to another airline. Prior research has suggested that those treated unfairly by a green institution may not necessarily become morally outraged. However, we argue that the participants in our study might not have had any experience of unfair service treatment. Service fairness, to some extent, is regarded as the service standard for the flight journey: passengers do not see service fairness as a reason to consider switching, but it may be one of the core elements by which they evaluate satisfaction with the flight journey.
Practical Implications
The findings have several managerial implications. First, the psychological benefit of warm glow can decrease consumer switching intention; it can be cultivated through the formulation of self-enhancement slogans or other communication strategies. Thus, enhancing consumers' warm glow might strengthen the corporate image and green experiential satisfaction, thereby reducing the probability of switching behavior. Hence, brand innovation in the Chinese green airline market should focus on building a green brand image that evokes consumers' feelings of nature connectedness and moral obligation. In contrast, marketing in China should pay less attention to motivating consumers through self-expressive benefits and nature experiences, and proportionally reduce the emphasis on utilitarian environmental benefits. Lin and Zhou (2020) confirmed this marketing strategy, stating that "utilitarian environmental benefit is evident in the branding of physical goods but fails to support the branding of services." Furthermore, green experiential satisfaction can help green service fairness and green alternative attractiveness to contain airline switching intention. Green alternative attractiveness can affect airline switching intention directly, but service fairness cannot. Marketers should keep in mind that green service fairness is an essential element of green experiential satisfaction. However, service fairness is not a determinant of consumer switching behavior, because in a competitive environment pricing remains a key concern even when the image is green. This does not mean that marketers should neglect service fairness: once passengers are dissatisfied because of unmet demand or unfair service, the disappointment will outweigh satisfaction with the flight journey, thereby increasing their intention to switch.
On the other hand, the competitors of green airlines might provide them with better experiential quality, and they may thus switch to another airline. Notably, in this study, the airline corporations were not limited to the green operation mode. Accordingly, the green airline management should improve the dimensions of experiential quality to allow the consumers to choose green companies and create green loyalty of green airlines.
Lastly, airlines need to improve their corporate image and green experiential satisfaction, which will not only enhance the brand image but also reduce the probability of switching behavior and establish an eco-surplus culture for customers.
Limitation and Future Research
This study has several limitations. First, this study focuses on green marketing constructs, and the relationships are examined in the comprehensive theoretical framework. Other potential green marketing constructs or relationships that are important may have been neglected in the theoretical framework. Future researchers may extend the current theoretical framework and examine whether other potential relationships exist, apart from those identified in this study, in various service industries or other countries.
Second, the data collection was conducted during the COVID-19 pandemic period. The research method was also only limited to a quantitative approach via a questionnaire. Moreover, to implement social distancing, we could only reach the participants via an online platform. Adopting a mixed-method approach could have been better to minimize the bias of the result.
Third, we only focused on Chinese passengers, which may not be generalizable to other geographical regions in other countries. Hence, future studies should collect samples from different nations to validate the generalizability of our research model. Future studies could also perform a cross-national analysis on the model of this study to determine if the subjects of different nations would generate different results.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
AUTHOR CONTRIBUTIONS
SC designed the topic. XZ and YW collected the data and wrote the manuscript. ZL collected the data and reviewed the literature. All authors contributed to the article and approved the submitted version.
FUNDING
This study was supported by a Macau Foundation Grant.
Aspergillus fumigatus AR04 obeys Arrhenius' rule in cultivation temperature shifts from 30 to 40°C
Carbon dioxide release rate increased from 0.37 per hour to 0.69 per hour when batch cultures of the ascomycete Aspergillus fumigatus AR04 were compared at 30 and 40°C. This rate increase indicates an approximation to unlimited growth in a stirred vessel.
Introduction
The first example of a classical chemical process being replaced by a microbial one at the kt year-1 scale is the production of vitamin B2. Interestingly, one bacterial and two fungal processes were developed and brought to the market by three competing companies more than 20 years ago (Stahmann et al., 2000). One reason for the economic success of the microbial processes in comparison with the chemical synthesis that had been applied for decades is that the conversion takes place in a single vessel: they are one-step processes, whereas the chemical processes consisted of five or more steps.
An unanswered question is why the bacterial process has not yet replaced both fungal systems. Possible answers are that (i) the three systems were never compared under unbiased conditions in the same laboratory, or (ii) all systems have advantages, e.g. the fungal production might lead to higher product titres while the bacterial system is faster. Being faster is a criterion important for the systematic comparison of so-called microbial chassis systems (Calero and Nickel, 2019). The naive expectation that the process using the filamentous fungus Ashbya gossypii might be the slowest, and therefore not competitive with Candida famata growing with a yeast phenotype, was wrong. Unexpectedly, Ashbya gossypii is competitive with Bacillus subtilis (Hohmann and Stahmann, 2010). Attempts to develop A. gossypii as a general production platform for other products, e.g. recombinant lipids (Ledesma-Amaro et al., 2015) or inosine (Ledesma-Amaro et al., 2016), worked in the research laboratory but are not yet applied. The opposite is true for Bacillus subtilis. Long before the term chassis microorganism was introduced to the literature, Bacillus subtilis was already applied in different fields. Today, B. subtilis strains excrete the fine chemical D-ribose at concentrations of more than 100 g per litre (Cheng et al., 2017). The production of the high-value polysaccharide hyaluronic acid by a heterologous membrane-associated enzyme (Westbrook et al., 2018) is also economically competitive. Even proteins, e.g. at least one serine protease and one metalloprotease, are produced at titres of more than 20 g protein per litre (Contesini et al., 2018).
Bacillus subtilis can grow faster than three doublings per hour. The filamentous hemiascomycete Ashbya gossypii is one order of magnitude slower. In chemostat experiments, dilution rates of 0.3 h -1 were possible which means that the fungus grew fast enough to avoid a washing out (Stahmann et al., 2001).
Other disadvantages of Ashbya gossypii are its need for complex nutrients, i.e. yeast extract, and its weak tolerance of low pH. Recently, these disadvantages were overcome by Phialemonium curvatum (Barig et al., 2011) growing in 100-litre plastic vessels in selective minimal medium. As reported recently (Barig et al., 2017; Zamani et al., 2020), different Aspergillus species as well as P. curvatum using crude palm oil as the sole source of carbon and energy were found to have an omnipotent anabolism.
Unlike filamentous fungi, baker's yeast has been utilized by mankind since 3000 BC, as documented by ancient Egyptian drawings depicting wine processing and food fermentation (El-Gendy, 1983). Baker's yeast, Saccharomyces cerevisiae, is used not only in the baking industry but also in ethanol production, heterologous protein expression and as a supplementary component in microbial medium preparation. To date, the highest recorded baker's yeast growth rate is 0.47 h-1, obtained in batch cultivation (Salari and Salari, 2017). A study using continuous culture (chemostat) showed steady states at dilution rates of 0.44 h−1 (Paalme et al., 1997). At growth rates beyond 0.28 h−1, S. cerevisiae was found to start ethanol production (Van Hoek et al., 1998).
This study was performed to set a benchmark for the growth of filamentous fungi. If the nutritive conditions are not limiting, an exponential increase is expected. A shift in temperature might reveal that even a complex eukaryotic system follows a simple rule of thumb like the Arrhenius equation.
Results
Compost isolate Aspergillus fumigatus AR04 grew faster and at higher temperature than reference strains
By the method reported previously (Barig et al., 2011), A. fumigatus AR04 had been isolated from compost on mineral salts medium (MSM) with crude palm oil (CPO) as sole source of carbon and energy. It had been the only fungus isolated at 50°C. In this study, its growth rate was compared with four different Aspergilli ordered from strain collections (Table 1) in 3°C steps, which could be managed by ordinary incubation chambers controlled by a pulsed temperature controller. Growth rates were determined between 28°C and 52°C. While the A. niger and A. oryzae reference strains did not grow at ≥ 43°C, A. fumigatus AR04 and two A. fumigatus reference strains were found to grow well at 43°C, 46°C and 49°C but not at 52°C (Fig. 1A). Interestingly, A. fumigatus AR04 grew faster at all temperatures than all reference strains in mineral salts medium, with a maximum of 420 µm radial growth per hour calculated at 40°C (Fig. 1B). In YEPD medium, AR04 showed a radial growth rate of up to 550 µm per hour (Fig. 1C). Striking was its relative advantage at 49°C, close to the temperature (50°C) used for its isolation: while AR04 grew 22% faster than ATCC 46645 at the optimal temperature of 40°C, at 49°C it grew 106% faster (Fig. 1C).
Colony growth rates of Aspergillus fumigatus AR04 were > 100% higher on agar plates in comparison with Ashbya gossypii ATCC 10895
A. fumigatus AR04 showed a higher radial growth rate when compared in a two-step temperature experiment with A. gossypii. While the latter's rate went down to 50% from 28°C to 40°C, the isolate's growth rate increased from 210% to 370%. More strikingly, AR04 was found to grow faster on MSM with CPO at 28°C than A. gossypii in complex medium with glucose (Table 2). No growth was observed with A. gossypii when yeast extract was replaced with mineral salts.
Chemostat and batch cultivation of Aspergillus fumigatus AR04 on minimal and complex glucose media showed the anabolic performance at 40°C
Growth rates of colonies on agar plates are easy to detect but cannot be compared with growth rates of submerged cultures. To get convincing data, chemostat experiments at high dilution rate and high stirring velocity were performed. High dilution rates lead to low biomass concentrations and therefore minimize gas exchange limitations. High stirring velocities avoid pellet formation (adherence) and immobilization (coherence). Under such conditions, at high glucose and low yeast extract concentrations, a steady state had been adjusted for A. gossypii: at 28°C and a dilution rate of 0.32 per hour, a concentration of 0.91 g per litre had been determined (Stahmann et al., 2001). Here, 40°C was used with Aspergillus fumigatus AR04, and four times more mycelial biomass was found at a dilution rate of 0.3 per hour (Table 3). An increase of the dilution rate to 0.5 h-1 led to the expected decrease in stationary biomass concentration and an increase in residual glucose. Corresponding changes were observed at a dilution rate of 0.7 h-1.
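At steady state in a chemostat, the specific growth rate equals the dilution rate (µ = D), so raising D forces the residual substrate up and the biomass down until washout at D ≥ µmax, the qualitative trend seen in Table 3. This can be sketched with a minimal Monod model; all parameter values below are illustrative assumptions, not measured constants for A. fumigatus AR04.

```python
# Minimal Monod chemostat sketch. All parameter values are illustrative
# assumptions, NOT measured constants for A. fumigatus AR04.
MU_MAX = 0.9   # h^-1, assumed maximum specific growth rate
KS = 0.05      # g L^-1, assumed half-saturation constant for glucose
YXS = 0.5      # g biomass per g glucose, assumed yield coefficient
S_IN = 10.0    # g L^-1, glucose concentration in the feed

def steady_state(dilution_rate):
    """Residual substrate and biomass at steady state, where mu = D."""
    if dilution_rate >= MU_MAX:
        return None  # washout: dilution outruns growth
    # Monod kinetics mu = MU_MAX * s / (KS + s), inverted at mu = D:
    s = KS * dilution_rate / (MU_MAX - dilution_rate)
    x = YXS * (S_IN - s)  # biomass formed from the consumed substrate
    return s, x

for d in (0.3, 0.5, 0.7):
    s, x = steady_state(d)
    print(f"D = {d:.1f} h^-1: residual glucose {s:.3f} g/L, biomass {x:.2f} g/L")
```

Stepping D from 0.3 to 0.7 h-1 in this toy model reproduces the pattern reported in Table 3: residual glucose rises, steady-state biomass falls, and any D at or above µmax washes the culture out.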
Rates of carbon dioxide release in the exhaust reached 0.7 h-1 in exponentially growing batch cultures
In chemostatic culture, adaptations to high dilution rates occur within hours. To measure anabolic performance under unlimited nutritional conditions, high glucose plus high yeast extract concentrations were added to the mineral salts. To minimize changes, e.g. in culture volume, no samples were taken. Instead, carbon dioxide concentration was detected online in the exhaust gas stream. Typical results are shown in Fig. 2. To minimize diffusion limitations, e.g. by pellets, conidia were used for inoculation of the pre-culture, and each run was stopped after 200 min, when the biomass concentration was not higher than 1 g l-1. Carbon dioxide increase rates were determined from the logarithm of the original data (Fig. 2, inserts). To make sure that glucose was not limiting, 20 g l-1 was used as the starting concentration, so that more than 10 g l-1 was left when the experiment was finished (Table 4). Independent runs at 30°C and 40°C showed a scattering of > 20%, but the 10°C step was sufficient to suggest a doubling of the carbon dioxide release rates from 0.36 to 0.69 per hour, as expected from the Arrhenius rule of thumb (Table 4). Macroscopic and microscopic morphology at inoculation with the pre-culture and after the 200-min experiments showed that hyphae were not attaching to the fermenter but were forming spherical flocs (Fig. 3).
[Displaced Fig. 1 legend fragment: (C) The latter strains were also compared on rich medium (HA) to minimize growth limitations caused by complex biosynthetic pathways. Mean values were obtained from three independent experiments; the standard error was calculated but is mostly not visible due to the small deviation.]
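Determining the release rate "from the logarithm of the original data" amounts to taking the slope of ln(CO2) versus time, since an exponentially growing signal is linear on a log scale. A sketch with synthetic exhaust readings (the signal values and sampling grid are invented for illustration, not the paper's raw data):

```python
import math

# Synthetic CO2 readings (% in exhaust) on a 10-min grid over 2 h,
# generated with an assumed specific rate of 0.69 h^-1.
times_h = [i / 6 for i in range(13)]
true_rate = 0.69
co2 = [0.05 * math.exp(true_rate * t) for t in times_h]

# Least-squares slope of ln(CO2) vs time gives the specific release rate.
logs = [math.log(c) for c in co2]
n = len(times_h)
t_mean = sum(times_h) / n
y_mean = sum(logs) / n
slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(times_h, logs)) / \
        sum((t - t_mean) ** 2 for t in times_h)
print(f"estimated specific CO2 release rate: {slope:.2f} h^-1")
```

On noisy real data, the same regression over the exponential phase yields the rates collected in Table 4.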
Discussion
The gold standard concerning the growth rate of a filamentous fungus in this study was A. gossypii. Since a µ of 0.3 h-1 (Stahmann et al., 2001) was stable in chemostatic cultivation, and the industrial riboflavin-producing competitor Bacillus subtilis is, at 0.6 h-1 (Dauner et al., 2001), two times faster, the question arose whether fungi can come closer to bacteria. The highest recorded µ values for fungi with a yeast phenotype in chemostatic cultivation were shown for Kluyveromyces marxianus and S. cerevisiae, with growth rates of 0.49-0.5 h-1 and 0.44 h-1, respectively (Paalme et al., 1997; Fonseca et al., 2007; Fonseca et al., 2013). With the compost isolate A. fumigatus AR04, a steady state was reached even at a dilution rate of 0.7 h-1. To the best of our knowledge, all fungal chemostat experiments reported so far present growth rates below 0.5 h-1 (Table 5).
Chemostat growth rates are highly artificial, since during steady state all concentrations are constant. In batch cultures, all concentrations change minute by minute. To get maximum rates, early batch cultivations with low biomass and excess substrate were investigated over 200 min only. To exclude substrate limitations, nutrients were given in excess. To minimize diffusion barriers, high conidia concentrations were used as inoculum for the pre-culture. The short fermentation time avoided pellet formation. These conditions revealed carbon dioxide release rates between 0.34 and 0.48 h-1. An up-shift from 30°C to 40°C showed the expected increase. If the carbon dioxide production is assumed to be proportional to the biomass generating that carbon dioxide efflux, the growth rate increase comes close to the Arrhenius rule of thumb.
[Displaced Table 3 legend: Conditions: 1000 rpm, 5 l min-1 compressed air, 40°C, 15 ml h-1 antifoam. A 100 ml overnight pre-culture of A. fumigatus AR04 was used for inoculation. After 4 h, the system was switched to continuous cultivation overnight with a dilution rate (D) of 0.3 h-1. Next morning, the dilution rate was either kept constant at D = 0.3 h-1 or increased to D = 0.5 or 0.7 h-1. After a minimum of four volumes was exchanged, three samples were taken at intervals of 1 h. Dry biomass, concentration of carbon dioxide in the gas exhaust and concentration of glucose were determined. Data originate from three representative runs.]
When the temperature was shifted from 28 to 40°C a decrease of colony growth was observed for A. gossypii. The opposite was true for A. fumigatus AR04. A model describing a relation between temperature and chemical reaction velocity is the Arrhenius equation.
The Arrhenius model (Equation 1) expresses the rate constant k as the product of a pre-exponential factor A and the exponential term exp(-Ea/RT), i.e. k = A exp(-Ea/RT), with activation energy Ea, gas constant R and absolute temperature T. If Ea is assumed to be 50 kJ mol-1, R is 8.31 J K-1 mol-1 and the temperature is 303 K (30°C), the value of the exponential term is 2.38 x 10-9. When the temperature T increases to 313 K (40°C), the value becomes 4.48 x 10-9. If the value at 30°C is set to 100%, the model predicts an increase to 188%. If the mean values of the calculated carbon dioxide release rates are compared, an increase to 192% was observed.
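The numerical walk-through above can be reproduced in a few lines, using the values as stated in the text (Ea = 50 kJ mol-1, R = 8.31 J K-1 mol-1):

```python
import math

R = 8.31       # J K^-1 mol^-1, gas constant as used in the text
EA = 50_000.0  # J mol^-1, assumed activation energy

def boltzmann_factor(temp_k):
    """Exponential term exp(-Ea / (R * T)) of the Arrhenius equation."""
    return math.exp(-EA / (R * temp_k))

f30 = boltzmann_factor(303.0)  # 30 degrees C
f40 = boltzmann_factor(313.0)  # 40 degrees C
print(f"30C: {f30:.2e}   40C: {f40:.2e}   relative increase: {f40 / f30 * 100:.0f}%")
```

The two factors come out at about 2.38 x 10-9 and 4.48 x 10-9, and their ratio at roughly 189%, matching the ~188% prediction quoted above (the pre-exponential factor A cancels in the ratio).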
Depending on Ea (Fig. 4), this is a change that fits the theoretical model. In a recent study by Alvarez et al. (2018), 33 enzymes were compared concerning the Ea needed for their specific reaction; a range between 17 kJ mol-1 and 88 kJ mol-1 was found. The hypothesis that a lower temperature growth optimum (TGrowth) of the hosting microorganism leads to the evolution of enzymes pulling down Ea seems at least to be true for the α-glucosidases of S. cerevisiae (TGrowth = 28°C; Ea = 71 kJ mol-1; Lee et al., 2007) and Thermus aquaticus (TGrowth = 70°C; Ea = 88 kJ mol-1; Lee et al., 2007).
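Conversely, one can ask which apparent activation energy the observed rate pair (0.36 and 0.69 h-1, Table 4) implies, by rearranging the Arrhenius equation so that A cancels. This back-calculation is our illustration, not a computation reported in the paper:

```python
import math

R = 8.31                 # J K^-1 mol^-1
T1, T2 = 303.0, 313.0    # 30 and 40 degrees C in kelvin
k1, k2 = 0.36, 0.69      # mean CO2 release rates (h^-1) from Table 4

# Taking the ratio of two Arrhenius expressions and solving for Ea:
#   Ea = R * ln(k2 / k1) / (1/T1 - 1/T2)
ea = R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)
print(f"apparent activation energy: {ea / 1000:.0f} kJ/mol")
```

The result, roughly 51 kJ mol-1, sits close to the 50 kJ mol-1 assumed in the worked example and well inside the 17-88 kJ mol-1 enzyme range reported by Alvarez et al. (2018).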
The Arrhenius model was originally developed for chemical reactions; biochemical systems of high complexity are rarely investigated. Recently, however, the model was used to explain the maximum ethanol production rate of Kluyveromyces marxianus at 43°C. Interestingly, the growth rate decreased when the cultivation temperature was increased from 30°C to 48°C (Olaoye et al., 2018). Growth kinetics of Listeria monocytogenes, a Gram-positive bacterium causing food-borne human infections, were studied in unsalted and salted (3%) salmon roe. Growth curves at temperatures between 5°C and 30°C fitted partly to the Arrhenius model (Li et al., 2016).
Currently, artificial genome reduction and chromosome synthesis are performed to gain both an understanding of a system reduced in complexity and a cell factory equipped with a minimum of structure and an optimum of function (Dai et al., 2018). There are no published reports that the growth rate of Bacillus subtilis increased after genome reduction. A temperature shift as performed in this study might become a tool to investigate whether metabolic limitations determine rates for CO2 release, growth or production.
Metabolic limitations for filamentous fungi are caused by the formation of macromorphologies, so-called pellets: dense spherical structures observed in submerged cultures. Calculations of diffusion rates indicated that the growth-limiting nutrient will almost inevitably be oxygen if air is supplied (Pirt, 1966). The presented study avoids these pellets by three measures: (i) inoculation of the pre-culture with conidia, (ii) short-term culture and (iii) high agitation velocity. In pellets, gas exchange as well as transport of substrates and products is hindered by pseudo-tissue. In the fine-dispersed mycelia observed here, each hyphal filament is reached by the convection of the stirred medium. This might be easier than an optimization of diffusion inside a pellet (Schmideder et al., 2019).
[Displaced legend fragment: A 7-l fermenter was run for 4 h with four litres of rich medium 2HA-MS made of 20 g l-1 yeast extract, 20 g l-1 glucose plus mineral salts.]
The highest growth rate published for A. fumigatus ATCC 46645 in a stirred vessel is 0.25 h-1 at 37°C (Vödisch et al., 2011). That is less than 40% of the rate determined here at 40°C. However, Vödisch et al. used minimal medium and stirring at only 550 rpm; their goal was not to determine high growth rates. Instead, limiting conditions were used to compare proteomes under hypoxic and normoxic conditions. Additionally, the data presented in Fig. 2B and C show that A. fumigatus AR04 grows faster than A. fumigatus ATCC 46645.
The highest growth rate of a filamentous fungus in a submerged batch culture was reported for Thermomyces lanuginosus at 50°C (Jensen et al., 1993). However, if Arrhenius' rule holds for this fungal species too, only 50% of 0.84 h-1, i.e. 0.42 h-1, can be expected at 40°C. Its radial growth rate on agar plates (370 µm h-1) was also lower than that determined for A. fumigatus in this study at 40°C (600 µm h-1). Neither comparison is entirely fair: the first, because Jensen et al. determined submerged growth by light absorption, which can be distorted by a change in pigmentation; the latter, because radial growth can be influenced by the diameter of the hyphae. That comparisons between different species concerning absolute radial growth rate are difficult becomes clear when looking at Neurospora crassa, reported to grow at more than 2,400 µm h-1 at 30°C (Steele and Trinci, 1977).
The growth maximum of all three A. fumigatus strains tested in this study is above the basal temperature of Homo sapiens (36-37.8°C; Hasday et al., 2000); it rather fits the febrile range (37.9-41°C; Hasday et al., 2000). Since all A. fumigatus strains tested here grew even at 49°C, a negative effect of the increased temperature alone on the pathogen's viability can be excluded. The same is true for most bacterial pathogens, e.g. Staphylococcus aureus (Mackowiak, 1991). To evaluate the role of hyperthermia, human monocyte-derived dendritic cells were stimulated with germ tubes of A. fumigatus in vitro and found to become modulated in activation and function (Semmlinger et al., 2014). Two isolates of A. fumigatus recently recovered from the International Space Station, i.e. under the highest environmental stress possible, e.g. concerning irradiation, were shown to be both stronger in pathogenicity in an animal experiment and faster in colony growth rate on agar plates than reference strains (Knox et al., 2016). Therefore, the anabolic performance presented here will hardly convince a company to apply A. fumigatus for any biotechnical production. Even the risk of exposure to aerial conidia, which can cause hypersensitivity reactions involving more than 20 different allergens (Schubert et al., 2018), is a criterion for exclusion.
However, highly competitive markets plus modern genome editing techniques might gain more impact in the future. BioAmber, purchased by LCY Biotechnology Inc., a division of Taiwan-based LCY Chemical Corp., produced succinic acid using Pichia kudriavzevii (current name: Issatchenkia orientalis; former anamorphic species: Candida krusei), a yeast isolated at pH 2.5-2.8 (Ahn et al., 2016). Low pH in organic acid production is preferred to harvest the undissociated acid instead of the less desired salt. C. krusei, or rather I. orientalis, is a species that is intrinsically resistant to the antifungal drug fluconazole and responsible for about 3% of cases of candidemia associated with severe immunodeficiency, such as haematological malignancies/stem cell recipients, corticosteroid therapy and previous exposure to azoles in humans (Guinea, 2014; Antinori et al., 2016).
A. fumigatus is a saprophyte, which means it contains > 100 genes encoding enzymes for the degradation of plant material e.g. more than 10 encoding cellulases (Fang and Latgé, 2018). On the other hand, it is the most frequent cause of invasive aspergillosis in immunosuppressed individuals (Antinori et al., 2016). Virulence causing invasive aspergillosis has a multifactorial nature as it appears as complex interplay between host and > 10 microbial factors (Ben-Ami et al., 2010). Highly efficient CRISPR-mediated genome editing was shown (Zhang et al., 2016). Therefore, deletion of genes encoding enzymes of melanin biosynthesis, encoding transcription factors triggering production of secondary metabolites i.e. gliotoxin, or encoding extracellular proteases and siderophore synthesizing enzymes might result in non-pathogenic strains.
A non-pathogenic Aspergillus species also shown to beat the anabolic performance of Ashbya gossypii is A. oryzae. Since no growth was observed at 43, 46 and 49°C, the transfer of genes encoding heat shock proteins (hsp) from the tested A. fumigatus strains might lead to interesting mutants. A proteome comparison at 30 and 48°C revealed the upregulation of 64 proteins, including 12 putative chaperones (Albrecht et al., 2010). A thermotolerance factor of unknown function, isolated as the THT A gene by functional complementation of a temperature-sensitive mutant (Chang et al., 2004), was not seen in that proteome comparison. A less complex system, e.g. the microsporidium Nosema ceranae, adapted to honey bees as host, might be scientifically more straightforward, since only 1-5 hsp genes were identified there as homologs of 2-20 genes in Saccharomyces cerevisiae.
[Displaced Fig. 5 legend fragment: Gas (6) and medium (7) were filtered (0.2 µm); carbon dioxide was measured (8).]
Cultivation media
Rich medium (HA) was used for pre-cultures and the determination of colony growth rates, especially to determine optimal cultivation temperature. Its composition was 10 g yeast extract and 10 g glucose per litre. If necessary, 18 g agar per litre was added.
Radial growth rate determination
Growth optima of colonies growing on agar plates were determined using different media and temperatures. To this end, fungi were pre-cultured in 100 ml HA medium in 500 ml shake flasks with two baffles overnight at 28°C and 120 rpm. For inoculation, mycelium was scraped from the surface and transferred into 10 ml of 0.9% NaCl solution. Disintegration of the mycelium with an UltraTurrax (IKA-Labortechnik type T 25) at 13 500 rpm for 30 s allowed a fine distribution of fungal cells. After overnight incubation, the mycelium was disintegrated again with the UltraTurrax. Five microlitres of pre-culture were placed in the middle of a solid Petri dish containing MSM with CPO or HA medium. Petri dishes were cultivated at temperatures between 28 and 52°C for seven days, with the increase in diameter being determined every 24 h. The determined colony diameter was divided by two to calculate the radial increase.
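The diameter-to-rate conversion described above amounts to halving each diameter reading and differencing over the 24-h interval. The diameter series below is a hypothetical example, chosen to give roughly the ~420 µm h-1 reported for AR04 on MSM at 40°C:

```python
# Colony diameters (mm) read every 24 h -- hypothetical example values,
# not actual measurements from this study.
diameters_mm = [5.0, 25.16, 45.32, 65.48]
interval_h = 24.0

radii_um = [d / 2 * 1000 for d in diameters_mm]        # radius in micrometres
rates = [(radii_um[i + 1] - radii_um[i]) / interval_h  # µm h^-1 per interval
         for i in range(len(radii_um) - 1)]
mean_rate = sum(rates) / len(rates)
print(f"mean radial growth rate: {mean_rate:.0f} um/h")
```

A linear fit over all daily readings would serve equally well; averaging the per-interval rates is the simplest version of the procedure.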
Cultivation in a stirred vessel
Batch cultivation. Pre-culture was made from two 500 ml shake flasks with baffles filled with 100 ml batch cultivation medium 2HA-MS. Each flask was inoculated with 1.6 × 10 9 spores and was cultivated for 19 h at 120 rpm and 30 or 40°C resulting in 0.5 g dry biomass per flask.
The fermenter system LABFORS (Infors GmbH, Einsbach, Germany) was used with a 7-litre vessel that has a double glass jacket for tempering with water and a disc stirrer with six stirring blades on two levels. Temperature was monitored with a Pt-100 sensor. Exhaust air was cooled by a reflux condenser at 10°C; from there, the air flowed via a bypass into the NDIR exhaust analyser. The device used was an EGAS-1 from B. Braun Biotech or a BCP-CO2 from BlueSens, calibrated with compressed air and a 5% mixture of carbon dioxide in compressed air. The fermenter, autoclaved with water, was filled with 4 l of 2HA-MS medium, freshly prepared but not sterilized. A temperature of 30 or 40°C was adjusted, and inoculation started when the temperature was reached. The fermenter was aerated with 3 l min-1 and stirred at 400 rpm; it was controlled by the software Iris V5. Samples from the pre-culture and main culture were tested for contamination.
Chemostatic cultivation. Five hundred millilitre shake flasks with two baffles, filled with 100 ml HA medium, were used for pre-cultures. Mycelium was scraped from agar plates, put into 10-15 ml HA medium and disintegrated with an UltraTurrax (IKA Labortechnik type T 25) at 13 500 rpm for 30 s. Shake flasks were cultivated overnight at 40°C and 120 rpm. Before the fermenter was inoculated, the pre-culture was disintegrated again.
For continuous cultivation, the same fermenter system was used as for batch cultivation (Fig. 5). It was aerated with 5 l min⁻¹, and the stirrer was set to 1000 rpm. The cultivation took place at 40°C. The fermenter was filled with 3 l minimal medium (MM) based on Monschau et al. (1998), which was filtered through a sterile filter (Whatman Polycap AS) and stored in a 20 l medium bottle. After inoculation with 100 ml pre-culture, the system was kept in batch mode for the first 4 h; then continuous cultivation started, with fresh medium constantly pumped into the glass vessel while culture broth was removed. The mass of the filled vessel unit was kept constant by computer-controlled action of the harvesting pump linked to the balance. Antifoam B emulsion (Sigma) was added constantly at 15.6 ml h⁻¹. To reach steady state, at least four exchanges of the reaction volume were allowed to pass, after which 100 ml samples were taken for biomass and glucose determination.
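At steady state in a chemostat, the specific growth rate equals the dilution rate D = F/V, so the "four volume exchanges" criterion translates directly into a waiting time of 4/D. A hedged sketch (the paper gives the 3 l working volume but not the feed rate, so the flow value in the example is purely illustrative):

```python
def dilution_rate(feed_rate_l_per_h, working_volume_l):
    """Chemostat dilution rate D = F/V (per hour); at steady state
    the specific growth rate mu equals D."""
    return feed_rate_l_per_h / working_volume_l

def time_for_volume_exchanges(n_exchanges, feed_rate_l_per_h, working_volume_l):
    """Hours needed for n reactor-volume exchanges to pass: n / D."""
    return n_exchanges / dilution_rate(feed_rate_l_per_h, working_volume_l)

# Assumed feed rate of 0.3 l/h with the reported 3 l working volume:
# D = 0.1 /h, so four volume exchanges take about 40 h.
```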
Glucose determination
After cultivation, the filtrate was used for glucose determination. All samples were frozen at −20°C and thawed for the measurement, which was performed as described in the D-Glucose UV test (R-Biopharm AG, Darmstadt, Germany) with a V-630 spectrophotometer (Jasco Deutschland GmbH, Pfungstadt, Germany).
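Enzymatic UV tests of this kind quantify glucose indirectly from the NADPH formed, measured at 340 nm via the Beer-Lambert law. A sketch of the generic evaluation formula used for such kits (the assay and sample volumes below are assumptions for illustration; the exact volumes and formula should be taken from the kit leaflet):

```python
def glucose_g_per_l(delta_a, total_volume_ml, sample_volume_ml,
                    mw_g_per_mol=180.16, epsilon_l_per_mmol_cm=6.3,
                    path_cm=1.0):
    """Concentration from the blank-corrected absorbance difference,
    using the general evaluation formula for enzymatic UV tests:
    c = (V * MW) / (eps * d * v * 1000) * dA  [g/l].
    eps = 6.3 l mmol^-1 cm^-1 is the standard NADPH value at 340 nm."""
    return (total_volume_ml * mw_g_per_mol) / (
        epsilon_l_per_mmol_cm * path_cm * sample_volume_ml * 1000.0) * delta_a
```

The result scales linearly with the absorbance difference, which is why samples outside the linear range must be diluted before measurement.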
Using mHealth Tools to Improve Rural Diabetes Care Guided by the Chronic Care Model
Background and objective: Used as an integrated tool, mHealth may improve the ability of healthcare providers in rural areas to provide care, improve access to care for underserved populations, and improve biophysical outcomes of care for persons with diabetes in rural, underserved populations. Our objective in this paper is to present an integrated review of the impact of mHealth interventions for community-dwelling individuals with type 2 diabetes. Materials and methods: A literature search was performed using keywords in PubMed to identify research studies in which mHealth technology was used as the intervention.
Using mHealth Tools to Improve Rural Diabetes Care Guided by the Chronic Care Model
Mobile health (mHealth) is an emerging field that has been defined as "medical and public health practice supported by mobile devices, such as mobile phones, patient monitoring devices, personal digital assistants, and other wireless devices" (Istepanian, Laxminarayan, & Pattichis, 2006). In the United States, there is widespread use of mobile devices and access to broadband internet service is improving (Smith, 2010). Applications using mHealth devices are being developed to improve and augment the care of type 2 diabetes patients in the community (Katz, 2012). However, careful attention to existing healthcare delivery structures must be considered during development of mHealth applications. Use of the validated Chronic Care Model will assist in successful and sustainable implementation of mHealth as a treatment option. (Article DOI: http://dx.doi.org/10.14574/ojrnhc.v14i1.276)
Background & Significance
Rural populations with low socioeconomic status are at higher risk of poor diabetes control, decreased self-management, and development of complications (Utz, 2008). There are 62 million Americans currently residing in rural areas (DeNavas-Walt, Proctor, & Smith, 2011) and it is estimated that 20 percent of this rural population is uninsured. Even with healthcare reform, this number is projected to increase to 25 percent by 2019 (Garrett, Loan, Headen, & Holahan, 2010).
In the United States, diabetes is most prevalent in the rural southeastern region (Barker, Kirtland, Gregg, Geiss, & Thompson, 2011).Nearly 12 percent of people in this region have diabetes compared to 8.5 percent in the remainder of the country.
Due to a lack of primary care providers in rural, underserved areas, there is a critical need for development and effectiveness testing of novel interventions that could improve health outcomes such as effective patient-provider communication, adherence to treatment, self-management ability, and biophysical outcomes. Achieving these improved outcomes must be done while allowing primary care providers to deliver culturally acceptable interventions that optimize time-efficiency and affordability (Barker et al., 2011). The ability of such interventions to improve care and reduce strain on rural healthcare practices will depend on the effective use of technology (Effken & Abbott, 2009).
Our objective in this paper is to present an integrated review of the impact of mHealth interventions, structured around the Chronic Care Model (Wagner, 1998). The model is designed to improve patient health outcomes through six interrelated system changes meant to make patient-centered, evidence-based care easier to accomplish (Roger et al., 2012). The major concepts in the model are the health system, community support, self-management support, decision support, clinical information systems, and delivery system design (Pullicino et al., 2011). A prepared healthcare team delivering planned interactions, self-management support with effective use of community resources, integrated decision support, and supportive information technology (IT) are designed to work together to strengthen the provider-patient relationship and improve health outcomes (Pullicino et al., 2011). Therefore, the literature in this article will be presented based on the major concepts of the Chronic Care Model (see Figure 1).
Methods
The PubMed database was searched from late June to mid-August 2012. The following search terms were used: 'Diabetes AND mHealth', 'Diabetes AND Telemedicine', 'mHealth AND health disparities', 'mHealth AND Chronic Care Model', 'mHealth AND Clinical Information Systems'. Limits were set for articles published in the last five years and in the English language. The inclusion criteria were: (1) studies which included participants with type 2 diabetes; (2) mHealth technology was used in the intervention; (3) there was randomization of participants to intervention and control groups.
Literature reviews and state-of-the-science papers were reviewed for individual references, but not included in this review. A total of 157 articles were found. After examining the title, abstract and keywords of retrieved records, we identified 23 articles meeting the inclusion criteria. The articles were then reviewed via a matrix method and placed into categories based on the concepts of the Chronic Care Model.
Health System
Five articles were found that incorporated health system changes using mHealth interventions. Health system characteristics are traditional structure and process elements of organizations, such as size, ownership, skill mix, and technology. The health system characteristics are considered to directly affect and be affected by patient outcomes. The system characteristics mediate the relationship between patient characteristics and interventions in producing patient outcomes (Mitchell, Ferketich, & Jennings, 1998).
The system of interest to us is the rural healthcare delivery system.Compared with their urban counterparts, rural residents are more likely to be poor, be in fair or poor health, and have chronic conditions.Rural residents are less likely than their urban counterparts to receive recommended preventive services and on average report fewer visits to health care providers.
Uninsured, rural adults are more likely to report the following difficulties: access to care, referrals to specialists, and timeliness of care for an illness or injury (Agency for Healthcare Research and Quality, 2008).
In recent years, the United States through the Center for Medicare and Medicaid Services (CMS) and a number of private health plans has relied on the use of technology with disease specific registries to facilitate tracking and the provision of quality care (Muntner et al., 2011).
Diabetes is well suited to the use of clinical information technology and use of EMRs because its management is routinely characterized by easily quantifiable outcomes and process measures (Kleindorfer et al., 2011).
It is feasible to incorporate mHealth technologies into an existing healthcare system (see Table 1). However, it is evident from this review that problems in mHealth technology use still exist and need consideration. Qualitative analysis shows that participants expressed frustrations with using the cell phones but liked the wireless system for collaborating with healthcare professionals and receiving automatic feedback on their blood sugar trends.
Community Resources & Policies
The Chronic Care Model recognizes the influence of community on patient outcomes (Kabagambe et al., 2011). Patients traditionally seek health information in three ways: on their own, from professionals, and from friends and family (Ahern, Woods, Lightowler, Finley, & Houston, 2011). The use of technology does not change this pattern. Due to the ubiquitous nature of mobile devices and the internet, our idea of community has expanded from the traditional definition, people living in a particular area or place, to a much broader network of social connections. Patients seek support from others with similar health concerns or conditions through listservs and social networks (Fox, 2011). Social networks bring peer support directly to patients without leaving one's home (Ahern et al., 2011). Therefore, current conceptualizations of community should include the on-line community, which can be defined as a network of individuals who interact through media, crossing geographical boundaries but united by a particular topic, interest, or goal.
Patients can now access health information, healthcare clinics, and providers through internet searches, secure e-mail, messaging, online medication refills, appointment requests, and secure patient access to electronic medical records (EMR) (Halanych et al., 2011; Judd et al.; Muntner et al., 2012; Wadley et al., 2011). The internet allows patients to quickly access vast amounts of disease specific information. Enhanced understanding of how patients seek health information may improve the way healthcare systems incorporate technology into the delivery of care. While an enormous amount of information is available with the click of a button, the quality of that material varies.
In the United States, there is widespread use of mobile devices and access to broadband internet service is improving (Smith, 2010). 3G service is available and reliable in the most densely populated areas of the United States. However, when considering implementing mHealth interventions in a rural population, 3G service is not always reliable.
Still, many of these areas have access to 1G and wired connections that could allow participation in mHealth interventions. It has been reported that even in the most rural areas of the United States, 77% of adults have a cell phone, which is only 10% less than in more urban areas (Zickuhr, 2013). Six in ten adults (63%) go online wirelessly with one of these devices (Smith, 2010).
Self-management
Diabetes self-management includes mindfulness related to eating habits, physical activity, monitoring blood glucose, medication taking, and communicating with healthcare professionals (Unverzagt et al., 2011). Evidence shows that patients who participate actively in their care achieve valuable and sustained improvement in physical and psychological well-being (Howard et al., 2011). The use of technology is making it possible to empower patients to learn new skills, enhance their self-management abilities, and structure personal care routines related to their illness (Kleindorfer et al., 2011).
Handheld devices can be used by patients and health care providers to support self-management of diabetes. Through a phone and an internet site, patients can upload information so that it can be interpreted by health care providers, and patients can receive more immediate feedback (Logan et al., 2007; Quinn et al., 2009; Soliman et al., 2011; Turner, Larsen, Tarassenko, Neil, & Farmer, 2009; Yoo et al., 2009; Zweifler et al., 2011).
Table 2
Self-Management of Diabetes via mHealth Technologies.
Decision Support
Decision support is defined as embedding evidence-based guidelines into daily clinical practice and integrating the expertise of specialists into primary care practices (Kabagambe et al., 2011). A typical way of interacting with specialists is for primary care practices to send patients to specialist visits and hope to get a letter back in return. Through the use of technology, we can get beyond traditional referral letters to real-time consultation and exchanges with patients and providers in different locations. Primary care providers, specialists, care teams, and individual patients can benefit from problem- or case-based learning, collaborating across geographical boundaries through the use of chat, voice, and video communications (Basoglu et al., 2012; Istepanian et al., 2009; Lyles et al., 2011; Rabin & Bock, 2011; Zolfaghari et al., 2009). These technologies will allow providers to jointly inform patients about guidelines and information pertinent to their care without lengthy waits between primary care visits and specialist appointments. This shift in the delivery of care allows for shared decision making and education between patients and the care team (Pullicino et al., 2011). This type of decision support will require a drastic change in the healthcare system. While mHealth tools have the potential to change practice, the authors could not find articles related to community-dwelling type 2 diabetes patients and the use of embedded decision support.
Clinical Information Systems
Clinical information systems are used to collect, integrate and distribute information within the context of a healthcare setting (Pullicino et al., 2011). The extent to which these resources and services are available varies widely. While rural healthcare clinics are often the last to adopt such practice changes due to cost, there are several free electronic medical record programs that can be incorporated into non-profit and free clinic settings. Integration of secure messaging, e-visits, home monitoring with feedback, health-risk appraisal with feedback, medication refills, tailored interventions, social network services, and links to community programs is now possible (Ahern et al., 2011). A delivery system redesign is needed to develop patient-centered clinical information systems. These information systems can be incorporated, with little cost, into free clinic settings.
Delivery Redesign
Living in rural areas presents multiple barriers, one of which is limited access to care due to distance (Arcury, Preisser, Gesler, & Powers, 2005). Rural populations with low socioeconomic status have poor outcomes, and the lack of primary care providers in rural, underserved areas demands a shift in healthcare practices. An EMR is still needed within the context of the healthcare system redesign (see Figure 2).
Discussion
Individuals with low socioeconomic status living in rural parts of the U.S. suffer disproportionately from poor health status, health disparities, and problems in accessing healthcare. The current rural healthcare system places the burden of caring for diabetes on patients and families who have very few resources. The cost of travel due to long distances between rural healthcare clinics and patients' homes frequently prevents patients from seeking needed healthcare. Mobile technologies are a promising approach to reducing these health disparities.
Used as an integrated tool and based on sound practice models such as the Chronic Care Model, mHealth may improve the ability of healthcare providers in rural areas to provide care, access to care for underserved populations, and biophysical outcomes of care. Although individual interventions to impact outcomes for diabetes patients using technology have been studied, no approach to date has used an integrated system of mHealth tools to deliver healthcare at a distance within existing rural health clinics. Examples include using technology to manage a chronic condition, use of a health record to store personal health information, use of remote monitoring devices such as blood pressure monitors, glucometers, and scales, and seeking support from others with similar health concerns or conditions through social networks.
Conclusion
Using the validated Chronic Care Model to translate what is known about mHealth technology to clinical practice will assist in developing a model of healthcare delivery using mHealth technologies that is usable and meaningful to both patients and rural healthcare providers. A delivery system redesign using mHealth technology must incorporate live technical support, be easy for users, include face-to-face communications, have a lower cost to patients and rural providers than traditional interventions, and incorporate back-up interventions for technical issues that cannot be resolved in real time. This article supports ongoing research and implementation of a substantive departure from the status quo: integrating multiple mHealth tools into an existing rural health clinic to go beyond traditional office visits and shift to real-time exchanges between patients and providers across geographical boundaries.
This article presents an integrated review of mHealth interventions for community-dwelling individuals with type 2 diabetes. The review structure is based on the Chronic Care Model, and a model of evidence-based healthcare delivery is proposed. Structuring what we know about mHealth technology using the concepts of the model adds clarity to the literature review and assists with translation to clinical practice. The Chronic Care Model has been used in clinical practice for over 12 years and is designed to assist healthcare practices to improve patient health outcomes by changing the routine delivery of care.
Figure 1: The Chronic Care Model

Problems in mHealth technology use still exist and need consideration. Face-to-face communication, live technical support, and cost are found to affect use of mHealth tools by patients. Technical problems and difficulty of use increased the likelihood of patients stopping use of the mHealth technology, and one study reported that telephone interventions were as likely to improve outcomes as mHealth interventions. Hence, developing a model of healthcare delivery using mHealth technologies must incorporate live technical support, be easy for users, include face-to-face communications, have a lower cost to patients than traditional interventions, and incorporate back-up interventions for technical issues that cannot be resolved in real time.
Table 1
mHealth and Health Systems

Technology allows patients to receive appointment reminders, education, and health behavior support, as well as measure glucose levels, blood pressure and weight and transmit this health information directly to data stores for clinical evaluation (Årsand et al.). The use of chat, voice, and video communications allows the healthcare team to provide many of the elements of a traditional office visit. A delivery system redesign is needed to develop patient-centered clinical information systems within the rural health care clinic setting. The use of innovative technology affords a low-cost, flexible means to supplement formal healthcare and is central in reshaping the care of rural populations. Through the widely validated Chronic Care Model, it is possible to deliver care to patients in their homes in remote underserved areas. Individual mHealth interventions have been found to improve outcomes, be cost effective, and culturally relevant. Examples of how technology has been used to improve outcomes include: patients seeking out health information via the web, access to services such as appointment scheduling and medication refills, communication with providers via secure messaging, and engaging with computerized interventions.
Effect of genotype on duodenal expression of nutrient transporter genes in dairy cows
Background: Studies have shown clear differences between dairy breeds in their feed intake and production efficiencies. The duodenum is critical in the coordination of digestion and absorption of nutrients. This study examined gene transcript abundance of important classes of nutrient transporters in the duodenum of non-lactating dairy cows of different feed efficiency potential, namely Holstein-Friesian (HF), Jersey (JE) and their F1 hybrid. Duodenal epithelial tissue was collected at slaughter and stored at −80°C. Total RNA was extracted from tissue and reverse transcribed to generate cDNA. Gene expression of the following transporters, namely nucleoside, amino acid, sugar, mineral and lipid transporters, was measured using quantitative real-time RT-PCR. Data were statistically analysed using mixed models ANOVA in SAS. Orthogonal contrasts were used to test for potential heterotic effects, and Spearman correlation coefficients were calculated to determine potential associations amongst gene expression values and production efficiency variables.

Results: While there were no direct effects of genotype on expression values for any of the genes examined, there was evidence for a heterotic effect (P < 0.05) on ABCG8, in the form of increased expression in the F1 genotype compared to either of the two parent breeds. Additionally, a tendency for increased expression of the amino acid transporters SLC3A1 (P = 0.072), SLC3A2 (P = 0.081) and SLC6A14 (P = 0.072) was also evident in the F1 genotype. A negative (P < 0.05) association was identified between the expression of the glucose transporter gene SLC5A1 and total lactational milk solids yield, corrected for body weight. Positive correlations (P < 0.05) were also observed between the expression values of genes involved in common transporter roles.
Conclusion: This study suggests that differences in the expression of sterol and amino acid transporters in the duodenum could contribute towards the documented differences in feed efficiency between HF, JE and their F1 hybrid. Furthermore, positive associations between the expression of genes involved in common transporter roles suggest that these may be co-regulated. The study identifies potential candidates for investigation of genetic variants regulating nutrient transport and absorption in the duodenum in dairy cows, which may be incorporated into future breeding programmes.
Background
In dairy cow systems feed is the single greatest variable cost, accounting for up to 80% of the costs of production [1]. As profitability is directly linked to the efficient conversion of feed into milk, the identification of feed efficient animals is critically important to the economic sustainability of the enterprise. In dairy cattle, residual milk solids production is a measure of feed efficiency and can be used to identify animals that produce higher amounts of milk solids but have a similar level of feed intake to their herd counterparts [2]. Indeed, studies [3,4] have also shown clear differences between dairy breeds and their feed intake and efficiencies. Schwerin et al. [5] reported differences in nutrient utilisation between dairy and beef breeds and, specifically, that the expression of genes involved in nutrient transportation in the liver and intestine differed between Charolais and Holstein bulls. Furthermore, recent data from an Irish study has shown that dairy cow genotype affects the expression profiles of genes involved in energy homeostasis in duodenum and liver [6].
The duodenum plays a critical role in nutrient digestion and absorption and is the site of expression of key signalling molecules regulating energy homeostasis and feed efficiency in cattle [6]. A number of studies have previously examined the effect of diet type on the absorption of nutrients, namely sugars, nucleosides and amino acids, in the duodenum of beef cattle [7][8][9]. However, there is a dearth of information detailing mineral and lipid transporter mRNA abundance in this tissue. Additionally, there is no information available on whether differences exist between contrasting dairy cow genotypes, or animals of different feed efficiency potential, in the absorption of nutrients in the small intestine. Thus the aim of this study was to determine the effect of dairy cow genotype on the expression profiles of a variety of genes involved in the transportation and absorption of nutrients and minerals in Holstein-Friesian (HF), Jersey (JE) and Holstein-Friesian × Jersey cross (F1) cows. Gene transcript abundance of five important classes of nutrient transporters, namely nucleoside, amino acid, lipid, sugar and mineral transporters, was investigated.
Materials and methods
All procedures involving animals were carried out under a licence for the Irish Department of Health and Children in accordance with the European Community Directive 86/609/EC.
Experimental animals
This study was part of a larger experiment designed to evaluate the performance of three dairy genotypes, HF, JE and F 1 (JE × HF), on a pasture-based production system. All data were generated at the Ballydague research farm (52°8′N 8°26′W), Teagasc Moorepark Dairy Production Research Centre, Fermoy, Co. Cork, Ireland. Performance data were obtained from 110 animals, representing HF (n = 37), JE (n = 36) and F 1 (n = 37) cows and was calculated as described by Prendiville et al. [3].
Tissue sample collection
At the end of lactation, cows were dried off and subsequently fed grass silage ad libitum for two months. A sub group of 30 cows from the initial 110 were randomly selected for inclusion in this study representing 10 HF, 10 JE and 10 F 1 . All 30 animals were slaughtered in a licensed abattoir (Dawn Meats, Charleville, Co. Cork, Ireland). Duodenal tissue (5 cm long) was harvested approximately 15 cm distal to the abomasal-duodenal juncture. Tissue samples were washed in DPBS. Epithelial tissue was then scraped from the underlying connective and muscular tissue using a glass microscope slide. The tissue was washed with sterile phosphate buffered saline (PBS), snap frozen in liquid nitrogen and subsequently stored at −80°C. All instruments used for tissue collection were sterilised and treated with RNA Zap (Ambion, Dublin, Ireland) before use.
RNA extraction and purification
Total RNA was isolated from approximately 40 mg of duodenal epithelial tissue using TRIzol reagent and chloroform (Sigma-Aldrich Ireland, Dublin, Ireland). Tissue samples were homogenised using a tissue lyser (Qiagen, UK), following which the RNA was precipitated using isopropanol. Samples were then treated with RQ1 RNase-free DNase (Promega UK, Southampton, UK), according to the manufacturer's instructions, in order to remove any contaminating genomic DNA. The quantity of the RNA isolated was determined by measuring the absorbance at 260 nm using a NanoDrop ND-1000 spectrophotometer (NanoDrop Technologies, DE, USA). RNA quality was assessed on the Agilent Bioanalyser 2100 using the RNA 6000 Nano LabChip kit (Agilent Technologies Ireland Ltd., Dublin, Ireland). RNA purity was verified by ensuring all RNA samples had an absorbance ratio (A260/280) of between 1.8 and 2.0. RNA samples with 28S/18S ratios between 1.8 and 2.0, and RNA integrity numbers (RINs), a measure of RNA quality based on the integrity of the 18S and 28S ribosomal RNA, of between 8 and 10, were deemed high quality.
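The quality thresholds listed above can be expressed as a simple acceptance check; a minimal sketch (the function name is illustrative, not from the paper):

```python
def rna_sample_passes_qc(a260_280, ratio_28s_18s, rin):
    """QC thresholds as stated in the Methods: A260/280 between 1.8 and
    2.0, 28S/18S ratio between 1.8 and 2.0, and RIN between 8 and 10."""
    return (1.8 <= a260_280 <= 2.0
            and 1.8 <= ratio_28s_18s <= 2.0
            and 8.0 <= rin <= 10.0)
```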
Complementary DNA synthesis
Total RNA (1 μg) was reverse transcribed into cDNA using a High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Foster City, CA, USA) with the Multiscribe™ reverse transcriptase, according to the manufacturer's instructions. Samples were stored at −20°C for subsequent analyses.
Primer design and reference gene selection
All the gene specific primers used in this study were designed using the web based software program Primer 3 (http://frodo.wi.mit.edu/primer3/). Potential primers were then subjected to BLAST analysis (http://www.ncbi.nlm. nih.gov/BLAST/), in order to confirm primer specificity and also to ensure that they were homologous to the bovine sequences. All primers for reference and specific target genes were obtained from a commercial supplier (Sigma-Aldrich Ireland, Dublin, Ireland). Details of primer sets used in this study are listed in Additional file 1: Table S1. All amplified PCR products were sequenced to verify their identity (Macrogen Europe, Meibergdreef 39, 1105AZ Amsterdam, The Netherlands).
In order to select stable reference genes relevant to duodenal tissue, analysis of putative reference genes was carried out using the geNorm version 3.4 Excel software package (Microsoft, Redmond, WA). Ct values were transformed to relative quantities using the comparative delta Ct method, to facilitate the calculation of the M value within the geNorm software. The software calculates the intra- and intergroup coefficients of variation and combines both coefficients to give a stability value, with a lower value implying higher stability in gene expression. A gene was considered sufficiently stable within the duodenal tissue if it generated an M value of less than 1.5. Within these parameters, beta-actin (ACTB), glyceraldehyde 3-phosphate dehydrogenase (GAPDH) and ribosomal protein S9 (RPS9) were selected as suitable reference genes for this study.
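geNorm's M value is, in essence, the average standard deviation of the log-transformed expression ratios of a candidate reference gene with every other candidate across samples. A simplified sketch of that core calculation (not the full geNorm package, which also performs stepwise exclusion of the least stable gene):

```python
import math

def genorm_m(quantities):
    """geNorm-style stability M per candidate reference gene: the mean
    standard deviation of log2 expression ratios with every other
    candidate across samples (lower M = more stable expression).
    `quantities` maps gene name -> list of relative quantities per sample."""
    def stdev(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
    genes = list(quantities)
    m_values = {}
    for j in genes:
        sds = []
        for k in genes:
            if k == j:
                continue
            ratios = [math.log2(a / b)
                      for a, b in zip(quantities[j], quantities[k])]
            sds.append(stdev(ratios))
        m_values[j] = sum(sds) / len(sds)
    return m_values
```

A gene whose ratio to every other candidate is constant across samples contributes zero variation, which is why co-regulated stable genes score the lowest M.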
Quantitative real time PCR (qPCR)
Following reverse transcription, cDNA quantity was determined and standardised to the required concentration for qPCR. Triplicate 20 μL reactions were carried out in 96-well optical reaction plates (Applied Biosystems, Warrington, UK), containing 1 μL cDNA (10-50 ng of RNA equivalents), 10 μL Fast SYBR® Green PCR Master Mix (Applied Biosystems, Warrington, UK), 8 μL nuclease-free H₂O, and 1 μL forward and reverse primers (250-1000 nM per primer). Assays were performed using the ABI 7500 Fast qPCR System (Applied Biosystems, Warrington, UK) with the following cycling parameters: 95°C for 20 s and 40 cycles of 95°C for 3 s and 60°C for 30 s, followed by amplicon dissociation (95°C for 15 s, 60°C for 1 min, 95°C for 15 s and 60°C for 15 s). Amplification efficiencies were determined for all candidate and reference genes using the formula E = 10^(−1/slope), with the slope taken from the linear curve of cycle threshold (Ct) values plotted against the log dilution [10]. Only primers with PCR efficiencies between 90% and 110% were used. The software package GenEx 5.2.1.3 (MultiD Analyses AB, Gothenburg, Sweden) was used for efficiency correction of the raw Ct values, interplate calibration based on a calibrator sample included on all plates, averaging of replicates, normalization to the reference genes and the calculation of quantities relative to the greatest Ct. Expression of each target gene was normalised to the reference genes, and relative differences in gene expression were calculated using the 2^−ΔΔCt method [11].
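The two calculations quoted above, the standard-curve efficiency E = 10^(−1/slope) and the 2^−ΔΔCt relative expression method, can be sketched as follows (function names and the example values are illustrative, not from the paper):

```python
def amplification_efficiency(log_dilutions, ct_values):
    """Efficiency from a standard curve: least-squares slope of Ct vs
    log10(dilution), then E = 10**(-1/slope). E = 2.0 corresponds to
    perfect doubling each cycle (100% efficiency)."""
    n = len(log_dilutions)
    mx = sum(log_dilutions) / n
    my = sum(ct_values) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(log_dilutions, ct_values))
             / sum((x - mx) ** 2 for x in log_dilutions))
    return 10 ** (-1 / slope)

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change by the 2^-ddCt method: dCt = Ct(target) - Ct(reference),
    ddCt = dCt(sample) - dCt(calibrator), fold change = 2**(-ddCt)."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2 ** (-ddct)
```

For a perfectly efficient assay the standard-curve slope is about −3.32 (log2 of 10), which this sketch maps back to E = 2.0.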
Statistical analysis
All data were analysed using Statistical Analysis Systems (SAS Institute, Cary, NC; version 9.2). Data were tested for adherence to a normal distribution using the UNIVARIATE procedure of SAS. A Box-Cox transformation analysis was performed using the Transreg procedure in SAS to obtain appropriate lambda values for data which were not normally distributed. These data were then transformed by raising the variable to the power of lambda. A mixed model ANOVA (PROC MIXED, SAS) was conducted to determine the effect of genotype on the relative expression of each gene measured. The Tukey critical difference test was performed to determine the existence of statistical difference between the treatment groups. In an effort to determine whether there was any evidence for potential heterotic effects on the expression of genes of interest, orthogonal contrasts were used to examine differences between the combined mean of expression values for Holstein-Friesian and Jersey animals compared with their F 1 hybrid. Spearman partial correlation coefficients were calculated to determine associations among gene expression values for each gene in the duodenum in addition to associations amongst gene expression and production efficiency variables, including residual feed intake (RFI), total milk solid (kg) produced over a 305 day lactation period per 100 kg (SOLIDS_WGT), and milk solids produced (kg) per kg of total dry matter intake (SOLIDS_TDMI), using the CORR procedure of SAS. Data were corrected for the fixed effects of both cow genotype and parity.
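The SAS workflow above (normality check, Box-Cox transformation, rank correlation) can be approximated in scipy; this is an illustrative analogue under simulated, hypothetical data, not the actual analysis code, and scipy's maximum-likelihood lambda estimate stands in for the Transreg step.

```python
import numpy as np
from scipy import stats

# Shapiro-Wilk as a stand-in for the PROC UNIVARIATE normality check.
rng = np.random.default_rng(42)
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=200)
w_before, p_before = stats.shapiro(skewed)

# Box-Cox transformation: scipy estimates lambda by maximum likelihood
# and returns the transformed data (raising the variable to power lambda,
# up to the usual Box-Cox scaling).
transformed, lam = stats.boxcox(skewed)
w_after, p_after = stats.shapiro(transformed)

# Spearman rank correlation between a gene-expression vector and an
# efficiency trait (toy values; both monotone increasing, so rho = 1).
rho, p_rho = stats.spearmanr([1.2, 2.4, 3.1, 4.8], [0.9, 1.8, 2.7, 3.9])
```

Partial correlations correcting for genotype and parity, as done here in PROC CORR, would additionally require residualising both variables on those fixed effects before ranking.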
Effect of genotype on cow production efficiency
A more comprehensive explanation of the genotypes, experimental design, grazing management, sward composition, feed intake and production efficiency measurements has been reported [3]. In brief, genotype had a number of statistically significant effects on cow productive efficiency. For example, daily milk solids yield (MLKS; fat and protein yield) was similar for HF and JE, but JE was lower than the F1 cows (1.33 kg for HF, 1.28 kg for JE and 1.41 kg for F1). Body weight was higher for HF (577 kg) compared to JE (435 kg), with the F1 intermediate (520 kg; P < 0.05), whereas body condition score was highest (P < 0.05) for the F1 cows (3.00 compared to 2.76 for HF and 2.93 for JE).
Dry matter intake (DMI) per unit body weight was highest for JE (3.99 kg compared to 3.39 kg for HF and 3.63 kg for F1), as was gross production efficiency (0.088 kg for JE compared to 0.087 kg for F1). In addition, at slaughter the internal organs (or components of the GIT) of Jersey cows weighed less than those recovered from cows of the other two breed types, with the exception of the omasum, which did not differ in size between breeds. HF cows had a smaller rumen-reticulum, abomasum and total GIT than both JE and F1 cows; however, when expressed as a proportion of metabolic liveweight, the weights of these organs did not differ between the three breed types [12].
Effect of cow genotype on the expression of genes in duodenal tissue
The effect of cow genotype on the expression of genes involved in nutrient and mineral absorption in the duodenum is presented in Table 1. Out of 27 genes tested, 19 were found to be expressed in duodenal tissue. Of all the genes studied, only one, ABCG8 (P = 0.042), was identified as significantly differentially expressed between groups. However, there was a strong tendency towards mRNA expression levels of SLC3A1 (P = 0.072), SLC3A2 (P = 0.081) and SLC6A14 (P = 0.072) differing between the three genotypes. There was evidence for a heterotic effect (P < 0.05) on the duodenal expression of the lipid transporter ABCG8, with expression levels higher for the F1 genotype compared with the mean of the two parent breeds. There was no evidence for any potential heterotic effects (P > 0.10) for the expression of any other gene studied in duodenal tissue.
Associations between duodenal gene expression values and animal production efficiency variables
A Spearman partial correlation analysis was conducted to determine the association between the expression of genes involved in nutrient and mineral absorption in the duodenum and feed efficiency variables, previously reported by Prendiville et al. [3,13]. Correlation coefficients for these associations are presented in Table 3.
Only one association was identified as reaching statistical significance, viz. the correlation between SLC5A1 gene expression and total milk solids produced over a 305 day lactation period (kg) per 100 kg of body weight (r = 0.93; P < 0.05).
Discussion
Heterosis, or hybrid vigour, where progeny show increased fitness relative to their parents [14] is of economic importance in livestock production [15]. Positive effects of heterosis on growth and BW traits [16,17] and feed efficiency [17] have been reported for beef cattle. In dairy cattle, crossbreeding programmes utilising divergent cow breeds such as HF and JE cows have been explored to address the demands of the dairy industry [18]. Differences in feed intake capacity and production efficiency in lactating HF, JE and their F 1 have previously been presented by Prendiville et al. [3]. The resulting F 1 progeny have demonstrated promise in improving several traits associated with milk production including feed efficiency [3]. Gozho and Mutsvangwa [19] showed that improved production performance with corn and barley diets appeared to be due to greater nutrient absorption in dairy cows fed oats and grass silage diets. It has been postulated that improvements in digestion or absorption of dietary energy and protein are a possible mechanism to explain variation in feed efficiency [20,21]. In the current study we hypothesised that the improvement in feed efficiency observed in the F 1 genotype is due to an enhancement in nutrient absorption in the GIT possibly mediated through a modification of gene expression in the nutrient transporters.
We have recently shown that key genes involved in energy homeostasis and appetite behaviour, including POMC and GLP1R, were differentially expressed in the duodenum and liver, between contrasting cow genotypes, in a tissue dependent fashion [6]. There is, however, a dearth of information regarding the effect of dairy cow genotype on the expression of genes involved in nutrient absorption and transport in the small intestine of cattle and their relationship with production efficiency variables. To uncover potential molecular mechanisms controlling the documented differences in production efficiencies between contrasting breeds, an investigation into the expression of nutrient transporter genes was employed. The duodenum, which is the first section of the small intestine, is a major site of nutrient absorption in all animals [22] and has also been shown to be sensitive to nutritional changes [9]. The current study focussed on examining duodenal gene expression profiles. To our knowledge, this is the first examination of the expression of nutrient transporters in the duodenal tissue of dairy cows.
Of all the nutrient transporter genes analysed, the lipid transporter ABCG8 was the only gene found to be differentially expressed across genotype. In addition, heterotic effects on the duodenal expression of ABCG8 were also observed, with mean expression higher in the F1 animals compared with the mean of the two parent breeds. ABCG8 is a transporter of dietary cholesterol.

[Table footnote: The probability of a coefficient not being statistically different from zero is denoted as follows: *P < 0.05, **P < 0.01 and ***P < 0.001. RFI: residual feed intake. SOLIDS_WGT: total milk solids produced over a 305 day lactation period (kg) per 100 kg body weight. SOLIDS_TDMI: total milk solids produced over a 305 day lactation period (kg) per kg total dry matter intake. Bold represents significant (P < 0.05) results in terms of differences between means in Table 1 or correlations in Tables 2 and 3.]

While it
is usually found co-expressed with ABCG5, there was no evidence for expression of this latter gene in duodenal tissue of dairy cows in the current study. Viturro et al. [23] examined the gene expression of the sterol transporters ABCG5 and ABCG8 in a range of bovine tissues including the intestine. While expression was detected in the abomasum, jejunum and colon, the duodenum was not examined in that study. We have therefore shown expression of the ABCG8 gene for the first time in the duodenum of the bovine. The protein encoded by this gene functions to exclude non-cholesterol sterols at the intestinal level, to promote excretion of cholesterol and sterols into bile, and to facilitate transport of sterols back into the intestinal lumen. It is expressed in a tissue-specific manner in the liver, intestine, and gallbladder. As plant sterols are a major component of the ruminant diet [23], expression of this gene in the duodenum is not surprising. It is therefore hypothesised that increased expression of this gene in the F1 genotype may lead to enhanced transport of plant sterols, potentially lowering serum and milk cholesterol levels and contributing to improved feed efficiency compared with the parent breeds. Future studies should focus on the functional role of ABCG8 in the digestive tract of ruminants and how it may improve feed digestion and nutrient utilisation in cattle. The fatty acid transporter CD36 is frequently detected in tissues such as adipose [24] and mammary [25]. Recently, the expression of CD36 was shown to depend on diet and region of the intestine, with greater expression recorded in the upper jejunum compared with the ileum [26]; however, in the current study mRNA expression of this gene was not detected. Amino acids are essential for optimal growth in cattle; however, there is little information available on the amino acid transporter proteins expressed in the duodenum of dairy cattle.
We observed a strong tendency towards increased expression of the amino acid transporter genes SLC3A1, SLC3A2 and SLC6A14 in the F1 genotype, compared with the two parent breeds, consistent with the enhanced production efficiency reported for this genotype. SLC3A1 is involved in sodium-independent transport of cystine and of neutral and dibasic amino acids across the cell membrane. There is little published data available on this gene for cattle, but it has been extensively studied in humans [27]. In the current study, expression of SLC3A1 was strongly correlated with SLC6A14, SLC7A6 and SLC7A7 mRNA abundance. SLC7A7 is involved in the sodium-dependent uptake of certain neutral amino acids and the sodium-independent uptake of dibasic amino acids. It requires the co-expression of SLC3A2 to mediate the uptake of arginine, leucine and glutamine, which probably explains the high correlation between these two genes. SLC3A2 is involved in light-chain amino acid transport and functions as a sodium-independent transporter of large neutral amino acids such as leucine, arginine, tyrosine and phenylalanine. SLC6A14 is involved in the sodium- and chloride-dependent transport of neutral and basic amino acids. In a study by Liao et al. [8], this gene, amongst others, was shown to be strongly regulated by diet. Furthermore, gene expression of SLC5A1 was negatively correlated with total milk solids produced over a 305 day lactation period (kg) per 100 kg of body weight. Liao et al. [8] found expression of SLC7A9 to be extremely low in the duodenum of beef steers and, indeed, in our study no expression of this gene was detected in the duodenal tissue of dairy cows. This could be due to diet effects, as animals in the current study were fed grass only, while steers in the study of Liao et al. [8] were fed cornstarch partially hydrolyzed by a heat-stable α-amylase. Chen et al.
[28] found the gene SLC15A1 to be expressed in the duodenum, jejunum and ileum of cattle while there was no expression detected in stomach, large intestine, liver, kidney and longissimus muscle tissue indicating that this gene is only expressed in the GIT.
In cattle, microbial-derived nucleic acids serve as a source of N and are absorbed as nucleosides through the small intestinal epithelia. Nucleosides are important nutrients for the development of gut and immune system function [29]. A supply of nucleosides is essential for many biological processes during animal development and growth, including DNA and RNA synthesis, energy (ATP) production, N and P recycling, cell signalling, and modulation of gene expression [29]. Liao et al. [30] showed that mRNAs for nucleoside transporters are expressed throughout the small intestinal epithelia of growing beef steers and can be increased by augmenting the luminal supply of nucleotides. Nucleoside carriers bind sodium ions as well as the nucleosides being transported. Consistent with our study, Liao et al. [30] found that SLC28A1, SLC28A2, SLC28A3 and SLC29A1 were expressed in the duodenum of beef steers. However, their group also detected mRNA expression of SLC29A2, which we failed to detect in dairy cow duodenal tissue, potentially due to differences in the basal diet offered. SLC28A1 is a sodium-coupled nucleoside transporter with a higher affinity for pyrimidine nucleosides such as cytidine and thymidine, and it transports nucleosides into the cell using the inward flow of sodium ions [31]. The expression of this gene was highly correlated with that of SLC28A2, which also codes for a sodium-coupled nucleoside transporter and functions in the same manner. The high level of correlation could be due to the fact that SLC28A2 has a high affinity for purine nucleosides such as adenosine and guanosine, and the expression of both genes is required for balanced absorption of purines and pyrimidines. SLC28A3 is both purine- and pyrimidine-selective and functions in a similar fashion to SLC28A1 and SLC28A2. Expression of SLC28A3 was highly correlated with that of SLC29A1, which is an equilibrative transporter.
Unlike the other three nucleoside transporters studied, SLC29A1 is sodium independent and mediates the influx and efflux of nucleosides across a cell membrane. While there are studies published on the expression of this gene in cattle intestines [30], it has been extensively studied in humans due to its potential in aiding the uptake of chemotherapeutic drugs. None of the nucleoside transporters examined in the current study were differentially expressed between genotypes.
The absorption of monosaccharides from the small intestinal lumen of cattle involves sugar transporters, such as sodium-dependent glucose transporter 1 (encoded by the gene SLC5A1) which transports glucose and galactose; whereas glucose transporter (GLUT) 5 (GLUT5; encoded by the gene SLC2A5) transports fructose, across the apical membrane of enterocytes. Liao et al. [8] examined the expression profiles of glucose transporters along the intestinal tract. SLC5A1 is a sodium-glucose co-transporter and transcription of this gene has been extensively studied in humans. Work on the bovine SLC5A1 gene has been conducted by Wood et al. [32] and Liao et al. [8]. Recently the expression of SLC5A1 in small intestinal epithelia was found to be influenced by the level of milk replacer fed to bull calves [26] and suggests that feeding high levels of milk replacer to calves can offer an advantage for greater uptake of lactose. SLC2A2 is a facilitated glucose transporter and is highly conserved among mammals such as humans, dogs, mice and rats. SLC2A5 is a cytochalasin B (a mycotoxin) sensitive fructose transporter. In our study expression of SLC2A5 was highly correlated with that of SLC2A2, possibly due to the fact that they both transport sugars. While we failed to detect an effect of genotype on the expression of sugar transporter genes here, a negative association was observed between the expression of the glucose transporter gene SLC5A1 and total lactational milk solids corrected for body weight. Expression levels of SLC5A1, SLC2A2 and SLC2A5 were highly correlated in the current study. Similar to amino acid transporters, the expression of the sugar transporters is possibly coregulated.
Conclusions
Taken together with the associated study of Alam et al. [6], these data suggest a possible role for differentially expressed genes in enhancing feed and production efficiency of dairy cows through improved absorptive capacity in the duodenum. There is evidence of enhanced expression of key genes involved in nutrient transport in the F1 genotype, compared with the two parent breeds, consistent with the enhanced production efficiency reported for this genotype. Expression of some genes involved in common nutrient transport roles is positively correlated, suggesting that these may be co-regulated. However, a global gene expression approach, using tools such as microarrays or RNA-seq, across regions of the GIT, between breeds and between individuals within breeds, is required to gain a greater understanding of the molecular control of feed efficiency and the contribution of GIT tissues in dairy cows. Furthermore, this study identifies potential candidates for investigation of genetic variants regulating nutrient transport and absorption in the duodenum of dairy cows, which may be incorporated into future breeding programmes.
Aim This study aims to explore the risk factors for perioperative acute heart failure in older patients with hip fracture and to establish a nomogram prediction model. Methods The present study was a retrospective study. From January 2020 to December 2021, patients who underwent surgical treatment for hip fracture at the Third Hospital of Hebei Medical University were included. Heart failure was confirmed by discharge diagnosis or medical records. The sample was randomly divided into modeling and validation cohorts in a ratio of 7:3. Relevant demographic and clinical data were collected. Univariate and multivariate logistic regression analyses were performed in IBM SPSS Statistics 26.0 to identify risk factors for acute heart failure, and R software was used to construct the nomogram prediction model. Results A total of 751 older patients with hip fracture were enrolled in this study, of whom 138 (18.37%, 138/751) developed acute heart failure. Respiratory disease (odds ratio 7.68; 95% confidence interval 3.82-15.43; P = 0.001), history of heart disease (chronic heart failure excluded) (odds ratio 2.21, 95% confidence interval 1.18-4.12; P = 0.010), ASA ≥ 3 (odds ratio 14.46, 95% confidence interval 7.78-26.87; P = 0.001), and preoperative waiting time ≤ 2 days (odds ratio 3.32, 95% confidence interval 1.33-8.30; P = 0.010) were independent risk factors for perioperative acute heart failure in older patients with hip fracture. The area under the curve (AUC) of the prediction model based on these factors was 0.877 (95% confidence interval 0.836-0.918). The sensitivity and specificity were 82.8% and 80.9%, respectively, and the fit of the model was good. In the internal validation group, the AUC was 0.910 (95% confidence interval 0.869-0.950).
Conclusions Several risk factors are identified for acute heart failure in older patients, based on which pragmatic nomogram prediction model is developed, facilitating detection of patients at risk early.
Introduction
Heart failure (HF) is a common perioperative complication in older people with hip fracture, and it is also the second leading cause of in-hospital death, with an incidence ranging from 5.5% to 21.3% [1,2]. The number of operations for hip fracture has increased consistently worldwide during the past decades [3-6]. The harmful effects of perioperative heart failure make it an important problem for the healthcare system. Previous studies showed that perioperative acute heart failure (AHF) substantially increased mortality within 30 days after surgery to 65%, prolonged the average length of stay by 4 days, and increased the average hospitalization cost by about 5500 euros [7-10].
At present, many scholars are paying more attention to AHF, which has led to a significantly improved understanding of its etiology. Cardiac history, age, anemia, and ASA score have been shown to be risk factors for AHF [11-15]. However, these studies only identified single, scattered risk factors for HF, which could not be translated into risk scales or prediction models. The New York Heart Association cardiac function rating scale and Goldman's cardiac risk index (GCRI) are commonly used cardiac function assessment scales, but neither is perfect. The former simply assigns scores according to patient complaints and is therefore easily affected by patients' subjective feelings and clinicians' subjective judgment, so it is somewhat biased when grading patients with mild heart failure [16,17]. The GCRI can evaluate the risk of perioperative cardiac complications, but it lacks model validation [18,19]. A prediction model can integrate and quantify various risk factors, helping medical personnel to stratify risk on an individual basis. Furthermore, although these two scales have some practicability, neither was specifically designed for the assessment of older patients with hip fracture.
Given the high incidence of AHF in older patients with hip fracture, it has become increasingly necessary to establish a prediction model for AHF. Therefore, we designed this study to explore the risk factors for perioperative AHF in older patients with hip fracture and to build a nomogram model.
Study design and study population
In this study, data of older patients who underwent hip fracture surgery at our hospital from January 2020 to December 2021 were retrospectively collected. All included patients were 60 years or older and received surgical treatment for hip fracture. Exclusion criteria were: (1) chronic heart failure; (2) missing data; (3) old fracture (more than 21 days after injury); (4) multiple fractures; (5) pathological fractures. This study followed the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of the Third Hospital of Hebei Medical University (approval number 2022-014-1). As this was a retrospective study, informed consent was obtained from patients by phone. Orthopedic surgeons, internal medicine physicians, and geriatric specialists jointly treated patients. Combining each patient's condition and examination results, the group discussed and decided on the perioperative infusion and treatment plans.
Definition of heart failure
In this retrospective study, the investigators reviewed the medical records to determine whether patients had perioperative AHF. The diagnostic criteria for AHF refer to the 2021 European Society of Cardiology guidelines for the diagnosis and treatment of acute and chronic heart failure [20]. The diagnosis of AHF was based on clinical symptoms (dyspnea, lung rales, lower limb edema, and rapid heart rate), laboratory examinations, and imaging examinations. At the same time, B-type natriuretic peptide (BNP) and cardiac troponin I (cTn I) should be considered together.
Research methods
Two researchers collected 34 variables covering demographic variables, operation-related variables, and laboratory parameters. Demographic variables included gender, age, body mass index (BMI), fracture site (intertrochanteric fracture or femoral neck fracture), injury mechanism (low-energy or high-energy), time from injury to admission, number of complications, and comorbidities (anemia, hypertension, diabetes, cerebrovascular disease, etc.). Operation-related variables included the American Society of Anesthesiologists (ASA) classification, anesthesia type, operation time, etc. Laboratory parameters included hemoglobin (HB), serum potassium concentration, BNP, cTn I, etc. BMI was calculated by dividing weight by height squared. According to the patient's physical condition and the risk of surgery, the ASA classification divides patients into grades 1-5 (grade 1: patients can tolerate the procedure well; grade 2: patients have mild systemic disease but no dysfunction; grade 3: patients have severe systemic disease and a degree of dysfunction; grade 4: patients have severe systemic disease and high anesthesia risk; grade 5: moribund patients).
To guarantee the homogeneity of the research subjects, the researchers strictly implemented the inclusion and exclusion criteria. After the two researchers entered the data, all data were cross-checked by a consultant who was also a researcher in this study. Suspicious or inconsistent data were corrected by referring again to the medical records.
Statistical analysis
In this study, IBM SPSS Statistics 26.0 software was used for statistical analysis. Continuous variables were described as mean ± standard deviation (SD), and categorical variables were displayed as rates or percentages. The two independent samples t-test or the Mann-Whitney U test was used to compare differences between groups for continuous variables, while the Chi-square test or Fisher's exact test was used for categorical variables. The significance threshold was set at P < 0.05. Variables with P < 0.05 in the univariate analysis were candidate variables for the multivariate models. Univariate and multivariate analyses were conducted to determine the independent risk factors. The predictive performance of the model was analyzed using the receiver operating characteristic (ROC) curve. The Hosmer-Lemeshow test was used to evaluate the goodness-of-fit of the prediction model, with P > 0.05 indicating acceptable fit. We validated the model using an internal data set. The predictors significantly associated with AHF were entered into R software for construction of the nomogram.
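The two model-evaluation steps described above (ROC AUC and the Hosmer-Lemeshow test) can be sketched as follows; this is a minimal numpy/scipy illustration rather than the SPSS procedures actually used, and the function names are illustrative.

```python
import numpy as np
from scipy import stats

def roc_auc(y_true, y_score):
    """AUC via the rank-sum (Mann-Whitney) identity; this equals the
    area under the empirical ROC curve."""
    y_true = np.asarray(y_true)
    ranks = stats.rankdata(np.asarray(y_score))
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def hosmer_lemeshow(y_true, y_prob, g=10):
    """Hosmer-Lemeshow goodness-of-fit chi-square over g risk groups;
    P > 0.05 indicates acceptable calibration."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    chi2 = 0.0
    # Sort patients by predicted risk and split into g equal groups.
    for idx in np.array_split(np.argsort(y_prob), g):
        n, obs, exp = len(idx), y_true[idx].sum(), y_prob[idx].sum()
        mean_p = exp / n
        chi2 += (obs - exp) ** 2 / (n * mean_p * (1 - mean_p))
    return chi2, stats.chi2.sf(chi2, g - 2)
```

A perfectly discriminating score yields an AUC of 1.0, and a perfectly calibrated model yields a Hosmer-Lemeshow chi-square near zero (P close to 1).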
Baseline characteristics
As shown in Fig. 1, a total of 751 patients were included in this study, of which 520 were allocated to the modeling group using simple random sampling in a proportion of 7:3. Within the modeling group, 90 patients had AHF and 430 did not. Overall, 138 patients (18.37%, 138/751) developed AHF, with 3.86% developing preoperative AHF and 14.51% postoperative AHF. As shown in Tables 1 and 2, age, age-adjusted Charlson comorbidity index (ACCI), fracture type, preoperative waiting time, respiratory disease, acute kidney injury, history of heart disease (chronic heart failure excluded), anemia at admission, ASA ≥ 3, left ventricular ejection fraction (LVEF), BNP value at admission, Hb value at admission, blood transfusion before the operation, and serum potassium value at admission were statistically significant factors (P < 0.05). Given that intraoperative variables were not statistically significant in the univariate analysis (P > 0.05), their impact on the multivariate logistic regression analysis and the construction of the prediction model could be disregarded. Therefore, we did not further stratify our analysis by the time of occurrence of heart failure, and classified perioperative AHF into preoperative AHF and postoperative AHF. The multivariate analysis showed that respiratory disease (OR 7.68, P = 0.001), history of heart disease (chronic heart failure excluded) (OR 2.21, P = 0.010), preoperative waiting time ≤ 2 days (OR 3.32, P = 0.010), and ASA class ≥ 3 (OR 14.46, P = 0.001) were independent risk factors for perioperative AHF (Table 4).
Prediction model construction
Based on the above results of the multivariate logistic analysis, we built a prediction model: Z = −5.964 + 2.039 × (respiratory disease) + 0.792 × (history of heart disease (chronic heart failure excluded)) + 2.671 × (ASA class ≥ 3) + 1.200 × (preoperative waiting time ≤ 2 days). Figure 2 shows a nomogram of the risk of perioperative AHF in older hip fracture patients. According to the classification of variables in the nomogram, the score corresponding to each index can be read off and the total score calculated by adding them; the prediction probability corresponding to the total score is the probability of perioperative AHF. Figures 3 and 4 show the receiver operating characteristic (ROC) curves of the prediction model in the modeling group and validation group, respectively.
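Using the published coefficients, the prediction underlying the nomogram reduces to a logistic function of Z. The helper below is an illustrative sketch (the function name and 0/1 indicator encoding are assumptions); note that exponentiating each coefficient recovers the reported odds ratios (e.g. e^2.039 ≈ 7.68 for respiratory disease).

```python
import math

def ahf_probability(respiratory_disease, heart_disease, asa_ge_3, wait_le_2d):
    """Predicted probability of perioperative AHF from the fitted model
    Z = -5.964 + 2.039*resp + 0.792*heart + 2.671*(ASA >= 3)
        + 1.200*(wait <= 2 days),
    with p = 1 / (1 + e^-Z). Inputs are 0/1 indicators."""
    z = (-5.964
         + 2.039 * respiratory_disease
         + 0.792 * heart_disease
         + 2.671 * asa_ge_3
         + 1.200 * wait_le_2d)
    return 1.0 / (1.0 + math.exp(-z))
```

For example, a patient with none of the four risk factors has a predicted probability of about 0.3%, while a patient with all four has Z = 0.738 and a predicted probability of roughly 68%.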
Validation of the prediction model
A total of 231 older hip fracture patients who met the inclusion and exclusion criteria were used as the validation group, and the performance of the model was verified on this data set. As shown in Table 5, there was no significant difference between the modeling group and the validation group in the comparison of baseline data (P > 0.05).

[Table 1: characteristics of the modeling group (heart failure vs. non-heart failure) at admission. BMI, body mass index; ACCI, age-adjusted Charlson comorbidity index; HB, hemoglobin. Values are mean ± standard deviation or number (percentage) as appropriate; P < 0.05 indicates statistical significance. In this study, anemia was defined as hemoglobin < 120 g/L for males and < 110 g/L for females; hypoproteinemia refers to serum albumin less than 35 g/L.]
Discussion
AHF is a common and serious complication in older patients with hip fracture. In this study, we found that the prevalence of AHF was 18.37%. The results demonstrate that respiratory disease, history of heart disease (chronic heart failure excluded), ASA ≥ 3, and preoperative waiting time ≤ 2 days were independent risk factors for perioperative AHF in older hip fracture patients.

[Fig. 2: Nomogram of the risk of perioperative acute heart failure in older hip fracture patients. In this study, "heart disease" refers to a history of heart disease (chronic heart failure excluded).]

The AUC of the risk prediction model was 0.877 in the modeling group and 0.910 in the validation group, indicating that the prediction model is highly accurate in identifying the occurrence of perioperative AHF. In the Hosmer-Lemeshow test, the P values were all greater than 0.05, indicating that the calibration of the prediction model is good. The nomogram drawn from the model visualizes the risk, making the model more scientific and practical.
The worse a patient's health condition before injury, the more likely they are to develop perioperative AHF, especially patients with respiratory diseases or a cardiac history. Paul found that the odds of heart failure in patients with chronic obstructive pulmonary disease (COPD) were 2.17 times those of patients without COPD, close to the 1.9-fold estimate of Truls et al. [21,22]. Patients with COPD often have decreased lung function and increased lung volume, which may lead to myocardial injury and left ventricular diastolic dysfunction, inducing AHF [23,24]. In this study, patients with a history of heart disease (chronic heart failure excluded) had a 2.21-fold increased risk of AHF compared with patients without heart disease before injury. Currently, the contribution of heart disease to AHF is explained by volume overload: cardiovascular disease can weaken the cardiac pumping function and cause fibrosis of the myocardial structure, increasing the risk of AHF [25]. For patients with cardiopulmonary insufficiency, airway management and volume management should be strengthened to stabilize the internal environment. In our institution, we encourage patients to maintain airway patency through effective coughing and deep breathing, while clinicians adjust the rehydration plan in a timely manner to keep the circulating volume consistent with cardiovascular function.
In this study, we found that ASA ≥ 3 was the strongest predictor of perioperative AHF in older hip fracture patients, with an OR of 14.46 compared with lower ASA classes. The ASA grading standard is a scoring system that can be used to assess patients' operative risk and guide resource allocation [26,27]. In addition, a higher ASA class has been shown to be associated with a higher probability of pulmonary embolism, myocardial infarction, and heart failure, as well as with poorer health condition and operative tolerance [13,28]. Given this, orthopedic surgeons and anesthesiologists should jointly conduct preoperative visits and anesthesia risk assessments for patients with high ASA scores.
The optimal timing of surgery for older patients with hip fracture remains controversial. International guidelines recommend that patients with hip fractures receive early aggressive surgical treatment within 48 or even 24 h after injury [29,30]. However, some researchers found that early surgical intervention in medically unstable patients can increase the risk of mortality [31], which could partially explain our finding that patients with early surgical treatment (< 48 h) had a relative risk of AHF of 3.32 compared with those with ≥ 48 h of preoperative waiting. Additionally, the quality of preoperative preparation may affect prognosis: older adults usually have multiple comorbidities and thus require more time to optimize their clinical condition in order to better tolerate the upcoming operation. Views differ on whether surgical delay increases the risk of complications [32-34], and different or specific medical environments could result in variable therapeutic effects of early surgery. On the other hand, owing to the allocation of medical resources, it is difficult for most hospitals in China to perform surgery within 48 h after injury. Therefore, how to choose the operation time, especially the optimal timing for patients with the greatest clinical benefit, remains a concern. More attention should be paid to preoperative optimization of medical conditions, rather than only to the specific window for early surgery. In particular, for hip fracture patients who have heart disease or respiratory disease and are in poor general condition, it is better to optimize the patient's condition for surgery than to operate while the general condition is unstable.
Nomograms integrate multiple independent risk factors identified via multivariate regression analysis and assign a value according to the contribution of each risk factor to the outcome variable, providing an excellent tool for visualizing results. In this study, the prediction model includes four easily available risk factors. Clinicians only need to draw vertical lines according to a specific proportion to obtain the prediction score for each variable and sum them to obtain the final probability of risk. This model helps doctors monitor the risk of AHF regularly and guides medical personnel in correcting potentially modifiable risk factors, thereby facilitating the reduction of perioperative AHF. A merit of this study is the establishment of a nomogram for visualized assessment of risk factors for perioperative AHF in older hip fracture patients. However, several limitations should be mentioned. First, there was a bias in the selection of subjects, because this was a single-center retrospective study; the results may therefore have been affected by inaccuracies in the collected data and by the absence of external validation, so prospective multicenter studies are required for verification. Second, although a multivariate regression model was used to minimize confounding, there may still be unknown or unmeasured confounders, such as perioperative fluid balance and perioperative defecation status. Third, the study sample was limited and thus had less power to detect the significance of some infrequent variables, such as renal failure, which is more likely to be associated with electrolyte disturbance and to cause adverse cardiac events [35,36]. Fourth, the relationship between the identified factors and the incidence of AHF is associative rather than causal and should therefore be interpreted carefully.
In particular, the causal relationship between preoperative preparation time and perioperative heart failure cannot be determined and needs to be explored in further research.
Conclusion
In summary, we observed that the overall incidence of perioperative AHF in older patients undergoing hip fracture surgery was 18.37%. Preoperative respiratory disease, history of heart disease (chronic heart failure excluded), preoperative preparation time ≤ 2 days, and ASA class ≥ 3 were independent risk factors for perioperative AHF, and these were combined into a readable nomogram to facilitate its use in practice and, subsequently, the potential reduction of AHF. Future studies with prospective and multicenter designs are warranted to verify our findings. | 2023-05-10T14:21:45.956Z | 2023-05-10T00:00:00.000 | {
"year": 2023,
"sha1": "2fafd2041062f354c4f1e8a132a7765086bfb485",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "2fafd2041062f354c4f1e8a132a7765086bfb485",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220260295 | pes2o/s2orc | v3-fos-license | Role of Endoluminal Techniques in the Management of Chronic Type B Aortic Dissection
In recent guidelines of international societies, the most frequent indication for treatment after chronic type B aortic dissection (cTBAD) is aneurysmal dilatation. Endovascular repair is recommended in patients with moderate to high surgical risk or with contraindications to open repair. During the last decade, many advances have been made in the field of endovascular techniques and devices. The aim of this article is to address the current status of endoluminal techniques for the management of cTBAD including standard thoracic endovascular repair, new devices, fenestrated and branched abdominal aortic devices and false lumen occlusion techniques. Electronic supplementary material The online version of this article (10.1007/s00270-020-02566-7) contains supplementary material, which is available to authorized users.
Introduction
Guidelines on the treatment of aortic dissection have traditionally recommended that uncomplicated type B aortic dissection (TBAD) be treated with best medical treatment (BMT) [1,2]. However, about 25 to 50% of patients who survive the acute phase will require open repair or thoracic endovascular aortic repair (TEVAR) during the chronic phase [3,4].
The INSTEAD XL study showed improved survival and delayed disease progression in survivors of TBAD who underwent TEVAR in addition to BMT during the subacute phase (14-90 days) [5]. A recent systematic review [6] highlighted that secondary interventions after BMT ranged between 9.0% and 40.6% in patients with TBAD. The lack of follow-up data for conservatively treated patients, patient heterogeneity, and the absence of consensus reporting standards for TEVAR hinder the interpretation of outcomes [7].
No randomized controlled trial exists comparing open surgical repair (OSR) and TEVAR for cTBAD treatment. In a systematic review by Kamman et al. [6], mortality of TEVAR for cTBAD was favorable compared to OSR. Another recent study demonstrated that TEVAR for cTBAD even in complicated cases was safe and effective. While aortic remodeling was favorable proximal to the coeliac artery after TEVAR, the low rate of distal false lumen thrombosis warranted further imaging surveillance [8].
TEVAR for aortic dissection started 20 years ago [9] and is still developing with novel techniques and devices. The aim of this article is to address the current status of endoluminal techniques for the management of cTBAD.
Risk Factors for Late Aortic Events in Patients with Uncomplicated TBAD
TEVAR is well accepted for patients with acute and chronic TBAD. Real-world data attempt to fill the gap regarding potential risk factors associated with worse outcomes and may identify patients who benefit from TEVAR [4,6].
Indications for Endovascular Intervention in cTBAD
In recent guidelines [1,2,13-15], the most frequent indication for treatment after cTBAD is aneurysmal dilatation (Table 1). (Table 1 summarizes recommendations from the Japanese Circulation Society [14], Erbel et al. [13], Riambau et al. [1] and Appoo et al. [15], including antihypertensive therapy to reduce the risk of aortic-related death in patients with chronic aortic dissection.) The fundamental concept of TEVAR in cTBAD is to cover the proximal entry tear, to redirect flow to the true lumen (TL), and to achieve thrombosis and regression of the false lumen (FL). Longer aortic coverage down to the celiac artery to cover distal tears increases the clinical success rate but also the risk of spinal cord ischemia [16].
Oversizing in the Proximal Landing Zone
Recommendations on the degree of oversizing differ from manufacturer to manufacturer, ranging between 4 and 32% [17]. While an increased risk of retrograde type A dissection (rTAAD) has been reported in cases of oversizing > 10%, its occurrence remains rare (1.6%). As the proximal landing is in a healthy, undissected aortic segment, it is our practice to apply standard oversizing of 10-20%. Another risk factor for rTAAD after TEVAR is the landing zone: the incidence is 2.7% in zone 2, 1.0% in zone 3 and 1% in zone 4 [17].
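The oversizing arithmetic is simple but worth making explicit. The sketch below applies the 10-20% practice stated above to a hypothetical 30 mm landing zone; the example diameter is an assumption for illustration, not a clinical recommendation.

```python
def graft_diameter_range(landing_zone_mm, low=0.10, high=0.20):
    """Stent-graft diameters giving 10-20% oversize over the proximal
    landing-zone diameter (the practice described in the text)."""
    return landing_zone_mm * (1 + low), landing_zone_mm * (1 + high)

def oversize_pct(graft_mm, landing_zone_mm):
    """Percent oversize of a chosen graft relative to the landing zone."""
    return 100.0 * (graft_mm / landing_zone_mm - 1.0)

# A 30 mm landing zone (assumed example) maps to a 33-36 mm graft window.
lo, hi = graft_diameter_range(30.0)
```

`oversize_pct` can be used to check a planned graft against whatever oversizing threshold a given manufacturer or center adopts.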
A. Standard TEVAR
Various stent grafts have been used in the last decade for the treatment of cTBAD (Table 2).
Valiant Navion TM (Medtronic Ave, Inc, Santa Rosa, Calif) is a lower-profile evolution of the company's Valiant TM thoracic stent graft. The Valiant system was assessed in the Virtue Registry [18], a prospective, non-randomized, multicenter European clinical registry. The principal clinical findings suggested that TEVAR provided good protection from aortic-related death in the midterm, but with a high rate of aortic reintervention [18].
RELAY Pro (Bolton Medical, Sunrise, Florida, USA) has a lower profile and improved pushability and visibility compared to the previous Relay Plus stent graft system, which was assessed in the RESTORE and RESTORE II studies [19,20], showing safety and effectiveness in patients with type A or B acute or chronic aortic dissections in terms of survival and low morbidity.
The Zenith TX2 Dissection Endovascular Graft (Cook Medical, Bjaeverskov, Denmark) with Pro-Form (Fig. 1) is a one-piece tubular endovascular graft indicated for acute and chronic aortic dissections. The Zenith Dissection Endovascular Stent is an uncovered, large-diameter self-expanding stent and may be used as a distal extension in order to expand the TL [21].
The GORE® (W.L. Gore & Associates, Flagstaff, AZ, USA) Conformable TAG® Thoracic Endoprosthesis was assessed in a multicenter clinical trial of TEVAR in the descending thoracic aorta [22]. This study confirmed treatment advantages for TEVAR when compared with literature-based results of open repair in terms of survival [22]. Recently, the GORE TAG Conformable Thoracic Stent Graft with active control system has been approved; the active control system allows bending of the proximal part of the stent graft during deployment in order to minimize the "bird-beak" phenomenon.
B. Extending the landing zone proximal to LSA

1. Custom-made devices
There is rarely a sufficient seal zone distal to the left subclavian artery (LSA) in cTBAD, which frequently requires LSA coverage and LSA debranching. A proximal landing zone length of 20 mm is desired, although shorter landing zones in Ishimaru zone 2 may be tolerable as long as the zone is non-dissected. Cervical debranching or a chimney graft for the left common carotid artery (LCCA) can further extend the landing zone to the level of the innominate artery (IA). However, landing in the ascending aorta is not recommended due to the risk of rTAAD.
The Valiant Mona LSA Thoracic Stent Graft System (Medtronic, Santa Rosa, Calif) consists of a main stent graft (MSG) and a branch stent graft (BSG) designed to maintain LSA patency when implanted in zone 2 of the aortic arch [24]. However, there are insufficient data for the safety and efficacy of this device in patients with TBAD as it has been assessed in aneurysms [23].
The W. L. Gore Thoracic Branch Endoprosthesis (W.L. Gore & Associates, Flagstaff, AZ, USA) is a single-branch device designed for either zone 0 or zone 2 deployment. There are also insufficient data for the safety and efficacy of this device in patients with TBAD, although it has been assessed in aneurysms [24].
The Cook Zenith (Cook Medical, Bjaeverskov, Denmark) fenestrated arch graft is a custom-made device (CMD) and may contain up to one scallop and one fenestration for perfusion of aortic arch vessels landing in zones 0-2 ( Fig. 2). The delivery system is precurved and uses diameter reducing ties and a spiralizing wire on the central cannula to ensure rotational control as well as a preloaded catheter for fenestrations to allow a through-and-through wire from left brachial access to safely align the fenestration to the target vessel.
The Relay stent graft from Bolton (Bolton Medical, Sunrise, Florida, USA) can be used as a CMD with a proximal scallop in order to maintain flow to aortic branches when deployed in zone 1 or 2 [25].
Similarly, scalloped or fenestrated physician-modified endovascular grafts (PMEGs) for zone 2 TEVAR may be used even in patients with cTBAD. Trubert et al. [26] showed that scalloped or single-fenestrated PMEGs for the LSA appear to be durable and safe in the midterm. Combined with low periprocedural morbidity and mortality, these results suggest that this approach can be considered an off-label alternative to extend the proximal seal to zone 2 for TEVAR [26]. The best-known devices for branched TEVAR, such as the Cook Zenith inner-branched arch endograft and the Terumo Aortic Relay double-branch endoprosthesis, are not approved for commercial use. However, the Endospan Nexus aortic arch stent graft recently gained the CE mark [27]; it is an ePTFE off-the-shelf system for the endovascular treatment of pathologies extending into or involving the aortic arch. Not being commercially available does not necessarily mean that these grafts are not
2. Chimney TEVAR
The chimney technique has mainly been used in urgent cases, when CMDs were not available or surgical revascularization was not feasible, in patients with aortic aneurysms. However, evidence from Bosier et al. [29] and Mangialardi et al. [30] showed that chimney TEVAR techniques can also be used with good outcomes in patients with TBAD.
Whether aortic arch vessels are managed endovascularly by debranching, the chimney technique or branched/fenestrated endografts depends largely on the availability of devices, technique and experience. Patient factors and anatomical considerations also come into play, so the authors' recommendation of dedicated fenestrated/branched endografts as a first choice remains a matter of personal preference.
C. Management of the distal zone of TBAD

Besides coverage of the proximal entry tear, TEVAR may seal further distal entries along its stent graft length, reducing false lumen perfusion. Further distal repair using endovascular techniques is required in patients who do not have a sealing option in the descending thoracic aorta due to diameter, or who develop aneurysms of the abdominal aorta exceeding the recommended treatment threshold. False lumen thrombosis depends on the length of coverage, which may be extended to the celiac artery, and on the origin of segmental arteries arising from the false lumen [31,32]. Another important issue in the management of the distal zone of TBAD is the stent graft-induced new entry (SINE), defined as a "new tear caused by the stent graft itself, excluding those created by natural disease progression or any iatrogenic injury from endovascular manipulation" [33]. SINE has been increasingly observed after TEVAR, with an incidence reaching up to 25%, especially in TBAD (Fig. 3). Distal SINE can develop into a patent false lumen with subsequent aneurysmal expansion and possible rupture. The most important risk factor for distal SINE appears to be excessive oversizing of the distal stent graft relative to the smaller true lumen, which may reach > 60% in comparison with the distal true lumen.
Fenestrated/Branched Endovascular Aortic Aneurysm Repair
Fenestrated and branched endovascular aortic repair (F/B-EVAR) may be required in patients who do not have a sealing option in the descending thoracic aorta due to diameter, or who develop aneurysms of the abdominal aorta that exceed the recommended treatment threshold.
Kitagawa et al. [34] showed that F/B-EVAR is a feasible option for patients with cTBAD to treat false lumen backflow and abdominal aortic dilatation. Recently, Oikonomou et al. [35] reported midterm outcomes of patients treated with F/B-TEVAR for postdissection TAAA, showing that this approach is feasible and associated with low perioperative mortality and morbidity. Recently, our group reported an excellent technical success rate of F/B-EVAR for the treatment of postdissection aneurysms and favorable 1-year outcomes in terms of decreased mean aneurysm diameters and a high false lumen thrombosis rate (92%) [36] (Fig. 4).
Provisional Extension To Induce Complete Attachment Technique (PETTICOAT)
The PETTICOAT technique consists of TEVAR with proximal tear coverage combined with a distal bare metal stent in order to reinforce the TL without covering side branches [37]. This technique was described in acute TBAD in order to achieve favorable remodeling during follow-up [38,39]. Recently, Kazimierczak et al. [40] used this technique in a limited series of patients with cTBAD, reporting favorable results when PETTICOAT was combined with covered stents in the iliac arteries. However, the remodeling capacity of a chronic dissection is limited, and additional uncovered stents crossing vital reno-visceral side branches may complicate subsequent treatment with F/B-EVAR.

Fig. 4 Fenestrated endovascular aneurysm repair (F-EVAR) in a chronic type B aortic dissection; A Intraoperatively, the device has been orientated according to the circle signs from the CT fusion system showing the target vessels (CT: coeliac trunk; SMA: superior mesenteric artery; RRA: right renal artery; LRA: left renal artery). B The postoperative computed tomography angiography of this patient, treated with a thoracic stent graft and carotid-subclavian bypass (arrow) for the proximal part and F-EVAR for the distal part of the disease.
Streamliner Multilayer Flow Modulator (SMFM)
A less widespread device is the Streamliner Multilayer Flow Modulator (SMFM; Cardiatis, Isnes, Belgium), a self-expandable braided stent interconnected in layers permitting a porosity of approximately 65%. This technology is supposed to promote thrombus formation in the aneurysm sac while maintaining blood perfusion into the involved branches [41,42]. A recent global registry highlighted that the SMFM may be an option for the management of aortic dissection [42]. Other studies suggest that the proposed treatment mechanism of the SMFM may not be effective in aneurysmal disease [41].
D. False lumen occlusion techniques
Complete false lumen thrombosis is achieved in only about half of patients after standard TEVAR covering the proximal part of the dissected aorta [38]. Studies have suggested that thrombosis of the false lumen may be an independent predictor of no further growth [43], while false lumen patency may be an independent factor for poor survival in cTBAD [44]. Flow to and pressurization of the FL are thought to contribute to further aneurysmal dilatation and rupture [45].
A variety of solid and liquid embolic materials have been used to embolize the false lumen with varying success since Loubert et al. first published their report in 2003 [46,47]. Techniques using more dedicated materials, manufactured as custom-made devices (CMDs) for false lumen occlusion, are the candy-plug technique and the Knickerbocker technique [48,49].
Candy-plug technique
Since we described the candy-plug technique in 2013 [48], several designs of candy-plug have been used as CMDs from Cook (Cook Medical, Bjaeverskov, Denmark). For candy-plug placement, the FL should preferably be catheterized at the level of the iliac arteries; over an extra-stiff Lunderquist wire, the candy-plug is placed into the false lumen at the same distal level as the true lumen stent graft, proximal to the celiac artery (CA). This technique occludes the FL proximal to the renovisceral segment to preserve flow to the reno-visceral arteries, while the thoracic stent graft is placed into the true lumen down to the level of the celiac artery (Fig. 5). In 2017, Rohlffs et al. [50] showed a high technical success rate of 100% and aortic remodeling in 70% of cases of chronic aortic dissection. Recently, early outcomes of the second-generation candy-plug (CP II) (Cook Medical, Bjaeverskov, Denmark) (Fig. 6) have been presented, showing that this device reduces the number of procedural steps (a self-closing fabric channel obviates the need for separate occlusion of its center) and offers a good seal, with low morbidity (only 2 patients with minor complications out of 14) and mortality (7%) and a high rate of aortic remodeling (88%) [51]. An important issue is the selection of the correct device size. For that purpose, the operator measures the largest diameter of the FL 1 cm above the celiac trunk on the preoperative computed tomography scan; the oversize of the CP II diameter should be 10% to 30%. The CP II should always be positioned with distal alignment to the true lumen stent graft. Our group has now used > 50 candy-plugs and continues to see promising results (Fig. 7).
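The CP II size-selection rule above (largest FL diameter 1 cm above the celiac trunk, 10-30% oversize) can be sketched as a filter over a list of available device sizes. The size list and the 34 mm measurement below are assumed examples for illustration, not Cook's actual catalog.

```python
def cp2_candidates(fl_diameter_mm, available_sizes_mm):
    """Filter available candy-plug II diameters to those giving 10-30%
    oversize relative to the largest false-lumen diameter measured 1 cm
    above the celiac trunk (sizing rule from the text)."""
    lo, hi = 1.10 * fl_diameter_mm, 1.30 * fl_diameter_mm
    return [d for d in available_sizes_mm if lo <= d <= hi]

sizes = [28, 32, 36, 40, 44, 48]     # assumed example sizes (mm), not a catalog
picks = cp2_candidates(34.0, sizes)  # 34 mm FL -> 37.4-44.2 mm window
```

Any size in the returned list satisfies the stated oversizing window; choosing among them remains an operator decision.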
The Knickerbocker technique
This technique does not require access to the false lumen. Its basis is to dilate a large-diameter stent graft in the middle part of the stent-graft-covered area at the distal descending aorta.
Coils, plugs, Onyx or glue
A systematic review of the literature [47] highlighted that embolization of the false lumen, even with a combination of coils, plugs, Onyx and glue, yields good outcomes in terms of remodeling. Recently, Pellenc et al. [52] demonstrated that embolization of the FL of chronic aortic dissections is technically feasible with a low morbidity rate. FL thrombosis was observed in the majority of cases and promoted favorable thoracic aortic remodeling.
Discussion
Before the endovascular era, E. Stanley Crawford commented on aortic dissection that "No patient should be considered cured of the disease" [53]. Since then, the development of devices and techniques has allowed treatment of cTBAD with good early and long-term outcomes. Thus, TEVAR has been an effective treatment strategy in TBAD [54], showing good remodeling of the aorta, described as expansion of the true lumen and thrombosis/regression of the false lumen induced by successful entry closure with TEVAR. Recently, Watanabe et al. [32] suggested that aortic remodeling after TEVAR is a significant prognostic factor for better long-term results in TBAD. In particular, interventions on the distal part of the dissection and/or embolization of the FL have led to favorable outcomes, with a reduction in aneurysm diameters and successful false lumen thrombosis [47]. A patent false lumen and aortic diameter themselves have been associated with aortic enlargement [55], while anatomic complexities such as acute aortic curvature and covered side branches have been associated with endoleaks [56]. Recently, Sharafuddin et al. [57] introduced a new false lumen-based classification schema for endoleaks occurring after endovascular therapy of type B aortic dissection that may be used in the near future to better describe aortic remodeling during follow-up. An important issue with TEVAR remains the incidence of stroke. LSA coverage has been identified as a risk factor for stroke: a systematic review reported an overall stroke rate of 7.4% for TEVAR following LSA coverage versus 4.0% for TEVAR performed distal to the LSA with zone 3 or 4 deployment (p < 0.0001) [58]. A very recent meta-analysis addressed this question as well [59]. However, the rate of local complications after LSA revascularization may be significant, leading to a higher re-intervention rate and morbidity [60].
Another potential cause of stroke in TEVAR procedures is air embolism which is a potentially underappreciated problem of aortic endografting, especially in the proximal segments of the aorta. The additional use of carbon dioxide should be considered as a standard flush technique for aortic stent grafts, especially in those implanted in proximal aortic segments, to reduce the risk of air embolism and stroke [61,62].
In a recent systematic review of the literature, it was shown that the risk of spinal cord ischemia remains low in patients treated with an endovascular approach for TBAD, particularly in centers with a caseload ≥ 40 [63]. There is a fine balance between benefit and harm: more extensive stent graft coverage appears to improve thoracic aortic remodeling after TEVAR, but the clinician should weigh the benefit of extensive stent graft coverage against its related risk of spinal cord ischemia [64].
In a review of the literature, Canaud et al. [65] suggested that although distal SINE is relatively frequent, when it does occur the complication can generally be treated with additional TEVAR with a good outcome, and that the main determinant of SINE seems to be excessive distal oversizing. A recent meta-analysis on distal SINE by D'cruz et al. [66] demonstrated that chronic TBAD and an excessive distal oversizing ratio are both positively and independently associated with the incidence of dSINE tears in TBAD. Lortz et al. [67] highlighted that the use of tapered stent grafts might be beneficial for patients with a high expected distal oversize, while other physicians suggest a distal-to-proximal endograft implantation sequence.
Conclusion
During the last decade, many endovascular devices and techniques have been developed to treat patients with cTBAD. Complexity and variation in the disease, as well as differences in endovascular techniques, make it difficult to draw valid conclusions about the place of TEVAR and its preferred technique. Reporting standards and randomized controlled trials are warranted to better understand the role of endovascular techniques in cTBAD.
Funding Open access funding enabled and organized by Projekt DEAL.
Compliance with Ethical Standards
Conflict of interest Tilo Kölbel has intellectual property with Cook Medical.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/. | 2020-06-30T15:29:23.194Z | 2020-06-29T00:00:00.000 | {
"year": 2020,
"sha1": "ac8862ae7ad4edc1e2a9758f396c3581705a5c59",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1007/s00270-020-02566-7",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "d3603e269a421589d1a824f743e02b302cce758e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
214639469 | pes2o/s2orc | v3-fos-license | Historical Geography of Antioch ― The Queen of the East : Through Arab Travelers
Travelers who have journeyed around the world, motivated by trade, exploration or mere curiosity, have written about the places they visited. Arab travelers could not bypass Antioch, a very significant city, on their journeys to Anatolia or Damascus. In this study, fifteen handwritten itineraries were located and studied in order to exhibit Antioch, the Queen of the East in ancient history, through the eyes of Arab travelers. Antioch has always occupied a special place in travel books with her ramparts and castles, streets, temples, bazaars and landmarks. Many travelers such as Al-Tabari, Al-Masʿudi, Al-Idrisi, Ibn Batuta, Al-Bakri al Andalousie, Yaqut al-Hamawi, Al-Qazwini and Abd al-Mu'min al-Bagdhadi visited the city and wrote down what they observed. The length and magnificence of the ramparts were the most impressive aspects of the city, as were the weather, the abundance of water and hammams, and the churches, the city being a religious center. According to the travel books, the travelers agreed that Antioch was one of the most significant and magnificent cities of the era between the 9th and 14th centuries in both the Roman and Islamic worlds.
I. INTRODUCTION
Antakya, also known as Antioch or Antioch-on-the-Orontes, is the capital of Hatay Province in southern Turkey. Antakya is located on the banks of the Orontes River, in a valley surrounded by mountains: the Amanos Mountains to the north, with Mount Keldag and Mount Habib an-Najjar (ancient Silpius) forming its eastern limits.
Antioch is a very old city that has been mentioned throughout the history of humankind. Its climate, fertile fields and location at the intersection of trade routes have always made Antioch a very important city [1]. A capital city in the Hellenistic period, the Queen of the East during the Roman Empire, and an important center of Christianity, Antioch is today a city where different ethnic and religious groups live in peace.
It was the third largest city of the Roman Empire, after Rome and Alexandria, and a center of civilization and religion. Around 40 AD, Barnabas, Pavlos and Petrus, apostles of Jesus, came to Antioch and spread Christianity from here throughout the Roman world [2]. Moreover, the term Christian was first used in this city for the followers of Jesus, which makes it all the more important religiously [3]. Since it has been a holy place like Jerusalem and Rome for the Christian world, the St. Pierre Church was recognized as a pilgrimage site by the Pope in 1963 [4].
(Manuscript received April 9, 2015; revised June 26, 2015. A. Balciogullari is with Çukurova University, Turkey; e-mail: abalci@cu.edu.tr.)
Travel books have been unique written resources, since they reveal very special and otherwise hidden information for researchers in fields such as geography, history and literature. In ancient times, when communication tools were limited, travel books served as a means of transmitting scientific and cultural developments [5]. Geography, a term combining "the Earth" and "depiction," was by its nature practiced by writing down information about the Earth [6]. Travel books can be considered one of the most important of these practices. As primary sources of information in historical geography studies, travel books reveal kinds of information that cannot be found in formal historical documents [7].
In the Middle Ages, Muslim scientists continued and developed the scientific methods and traditions of previous cultures. In that era, many scientific books were written in the field of geography, as in all other fields of science. Many Muslim scientists traveled from Spain to India and Africa, and on to China and Russia, and observed and described the places they visited in keeping with the geographical understanding of their times [8]. Arab geographers, travelers and merchants wrote masterpieces that can be used as primary resources in the field of geography. Especially during the 7th to 14th centuries, geography was a field of science in which Muslim scientists were particularly interested [9].
Antioch was founded in 300 BC by Seleucus I Nicator, one of the generals of Alexander the Great and king of Syria, and named Antioch by him. The city was the third largest city of the East Roman Empire and was referred to as "Orientis Opicum Pulcrum (The Queen of the East)" in the Latin manuscripts of that period [10]. It is also one of the pilgrimage centers of the world and hosts a very significant mosaic museum. Today it is a center of social and religious tolerance as well as one of the most authentic cities of Mediterranean culture.
II. METHOD
The purpose of this study is to present the historical geography of Antioch through the eyes of Arab travelers, by examining their travel books written in Arabic between the 9th and the 14th centuries. For this purpose, the travel books of Arab travelers from the 9th to the 14th centuries were examined and the parts related to Antioch were quoted. Using various resources, the study attempts to reconstruct a view of Antioch as it was approximately a thousand years ago.
Since this study aims to depict and describe past conditions as they occurred, a descriptive survey model was adopted. The document study method, one of the qualitative research methods, was used. According to Yildirim and Simsek [11], document study involves the analysis of written materials about the target case or cases. In qualitative research, when direct observation or interviews are impossible, or when the aim is to increase the validity of the research, written or visual materials can be included in the study alongside direct observation and interviews.
To conduct this study, facsimiles or photographs of the handwritten travel books written in Arabic between the 9th and 14th centuries were obtained, and the sections about Antioch were selected from them in order to gather information about the city at that time. Some of this information is quoted directly.
A. The Foundation of Antioch and Its Ramparts
The city was founded around 300 BC by Seleucus Nicator, one of the generals of Alexander the Great. According to the ancient sources, with a population of three hundred thousand, Antioch was the third largest city of the Roman Empire and the fourth largest city of the world. Seleucus founded the city in the piedmont of Silpius Mountain (Habib al-Najjar Mountain), beside the Orontes (Asi) River, and named it after his father Antiochus. Seleucus I Nicator, who had established his kingdom on the vast lands from Syria to India, also founded 14 cities under the name of Antioch. Today, only the Antioch that is the subject of this study still bears this name [12].
The ramparts, built by Seleucus I Nicator simultaneously with the city, ran from the peak of Silpius (Habib-i Neccar) down through the plains to the Asi River and completely surrounded the city. The ramparts were up to 23,600 meters long and carried walkways, so that it was possible to walk all the way around the city. There were 360 multi-story, square-shaped bastions on these ramparts, spaced one arrow shot apart. Each bastion had iron gates like castle gates. The bastions had five stories: the bottom floors were for animals, and the top floors for watchmen and soldiers. Before they were demolished, there were gates in these ramparts opening in various directions (toward Alexandria, Aleppo, Defne, Kuseyr). The most important of these was the Bridge Gate, which stood on the Asi River and was the only entrance for roads from the north. There was also an inner castle on the highest point of Habib-i Neccar Mountain.
The ramparts, which had stood in all their magnificence, survived many attacks and earthquakes during the 10th, 11th and 12th centuries and were described in detail, and in much the same manner, by each Arab traveler who came to Antioch. Most of them recorded very similar information. Al-Masʿudi 1 is the one who gave the name of the founder of the city and provided details about the city, its ramparts and its castle. He stated that [13] "Antiochos had the city built" and gave the following information about the city and the ramparts: "He had built the most remarkable structure, stretching over the mountains and plains. The length of the ramparts is 12 miles. The number of bastions is 136. The number of balconies on the ramparts is 24 thousand. He also built gates for each bastion, large enough for the entrance of men and their horses, and in addition he built higher towers for each bastion. There were shelters for animals on the bottom floors of the bastions and quarters for the men on the top floors. Each bastion also has an iron gate, like a separate castle. The marks and remains of those iron gates are still here (that was the year 332)".
The ramparts of the city of Antioch were protected by mercenaries whose center was in Istanbul, and the contract with these mercenaries was renewed each year. Al-Qazwini 2 gave precise numbers about the ramparts of Antioch [14].
The ramparts have various sections and 360 bastions. They are protected by 4000 soldiers under the command of Constantine. The duty of protecting the ramparts is contracted out for one year, and the contract is renewed for the next year. The ramparts are among the most interesting constructions of the world, as they run from the river banks to the tops of the mountains.
The bastions of the ramparts of Antioch were studied in detail by Al-Qazwini 3, who described both the castle and the features of the bastions [14].
The perimeter is 12 miles. Each bastion has paths which connect it both to the city dwellers and to the people working for the castle in its surroundings. In addition, he built in each bastion a bottom floor for horses and a top floor for horsemen. Each bastion looks like a castle and has an iron gate. On the very top floor there is a residence for the patriarch. There is, however, no entrance to this castle from outside. The city has a circular shape, lying half on the plain and half on the mountain. There is a huge castle on an extremely high point of the mountain, which can be seen from a very long distance. This castle delays the sunrise over the city; the sun can be seen in the city only after 2 pm.
Yaqut al-Hamawi 4 stated in his travel book that the ramparts surrounding Antioch reached the top of Mount Habib Neccar, against which the city leaned, and that there was a castle on the summit that could be seen from a very long distance. In addition, as we learn from Al-Hamawi, there were five entrances in the ramparts of the city; however, he did not provide any information about their names or where these gates led [15].
Giving information about the ramparts of Antioch, he wrote: "The ramparts around the city are among the most interesting structures of the world. The length of the ramparts is 12 miles. The number of balconies on the ramparts and bastions is 24 thousand. There are 360 bastions. Each bastion has paths connecting it with the others, and also has rooms for horses and men. There is a connection between the room for horses on the bottom floor and the room for men on the top floor. Each bastion has an iron gate, just like a castle".
B. The Magnificence of Antioch, the City of God or the Queen of Cities
Antioch, named "The Queen of the East" in ancient times, was a favorite city of the emperors in the Roman period, with a population in the hundreds of thousands. The famous fourth-century historian Ammianus Marcellinus stated that "...no city in the world can rival this city in the fertility of its lands and the richness of its trade". It is understood that the travelers agreed that Antioch was the most majestic city of the age in the 11th and 12th centuries. It is also understood that the architectural structures, location, nature, castle, climate, water and fertile lands of Antioch impressed the travelers who visited it, even after the wearing effects of wars and earthquakes. According to Al-Qazwini [14] and Al-Iskandari 9 [20], Antioch was one of the strongest cities of its time. The people of Antioch were aware of the magnificence and significance of their city and were very proud of it. Al-Qazwini 10 reflects the beauties of Antioch as follows: "Antioch is one of the leading cities of the Damascus region, by the sea of Rome. Its characteristic features are the freshness of its water, its pleasant climate, and its chasteness. There are fields and gardens in the city as you enter." [14].
According to Abd al-Mu'min al-Baghdadi 11 as well, Antioch was one of the most important cities of the Damascus region [18]. When describing Antioch, Al-Baghdadi used the words "elegance and beauty".
Whereas Al-Tabari 12 stated that Antioch was the best city of the Damascus region [21], Al-Bakri 13 added that Antioch was the most important city in the world, in his own words: "In all the Arab world, there was Antioch" [22].
Another traveler who emphasized the chasteness and elegance of the city is Al-Himyari 14 [19]. Al-Himyari recorded his observation briefly as:
It is a magnificent city in the Damascus region. Among all the Arabs, whatever existed before Damascus was in Antioch. There is no city like it in either the Islamic or the Roman lands. It is generously located, and such a chaste city cannot be found in the Damascus region. … There are gardens and fields, buildings and inns inside the ramparts.
Antioch is very important for Christianity as well, a fact also noted by the travelers of that time. Al-Bakri 15 stated that, since it was very significant for Christians, it was also called the city of God [22]. He explains as follows.
"Being the city where Christianity was revealed for the first time, and where the throne of Petrus and Simon was located, the Christians called the city "The City of God" or "The City of the King"." Al-Masʿudi 16 [13] also explained this feature of Antioch in similar words:
"Antioch has many interesting buildings and is a magnificent city. Christians named it "The City of God". Antioch is also called "The City of the King", since Christianity began spreading from here."
Ibn Batuta 17 stated that [17] "Antioch is a majestic and noble city with many beautiful houses and buildings. The city is also full of trees".
C. The Churches of Antioch
Antioch is the city where the name "Christianity" was used for the very first time, and the St. Pierre Church in the city is one of the most important historical churches of the Christian world. It is on the UNESCO World Heritage tentative list. The church is also accepted as a pilgrimage site for Christians, and every year on the 29th of June the Catholic Church holds a mass there 18. Although only a couple of churches still exist in Antioch today, it is clear that at the time these Arab travelers visited, there was a remarkable number of churches. Antioch was not only a place of churches but also, according to Al-Idrisi 19 [16], one of the five patriarchal centers to which Christians had been affiliated since the 7th century; the others are Rome, Alexandria, Jerusalem and Istanbul. After the fourth century, the fame of Antioch in the Christian world increased and it became the second religious center after Rome, and one of the five great patriarchates of the Orthodox Christians.
Al-Qazwini 20 [14] described the number and features of these churches: "There are numerous churches. All of these churches were adorned with gold and silver and constructed with colored glass and grainy marble." Al-Masʿudi 21 mentioned the important churches of Antioch by name [13]:
There is a church in Antioch called the Pavlus Church; this is the Eli Bab Faris church, which is also known as the Al Beragis Monastery. Another is the Aşmunit church, where prayers and great religious festivals were held; another is the Barbara Church; and there is also the Meryem Church, one of the most magnificent churches in the world in terms of height and strength.
The architectural features of the churches were described by Yaqut al-Hamawi 22 as follows [15]: "there are many churches made of gold, silver, colorful glass and marble, which are architecturally unique." Yaqut al-Hamawi 23 singled out one of the churches in the Antioch city center [15], which was not only a church but also an education center. Al-Hamawi described this church and its features as follows [15]: "They sell qisyan (clothing made of flax or silk) in the city center. The building in which they sell these clothes stands in the place where the son of the King was resuscitated. The length of this building is 100 steps and its width 80 steps. There is a church in this building, constructed on pillars. The building is surrounded by porticos; the governor of the city lives there, and languages were also taught there."
D. The Abundance of Water and Hammams
Another feature of Antioch that attracted the attention of the travelers who visited the city was the abundance of its water and hammams. The importance of Damascus among the Arabs was obvious: when describing a city or praising its beauties, Arab travelers always compared it with Damascus. About the abundance of water, Al-Masʿudi 24 stated that [13] "The freshness of the water in Antioch can be judged by looking at their livestock," and Al-Himyari 25 said [19], "The water of Antioch is abundant enough to fill all the streets".
The Arab traveler Al-Idrisi 26 provided more detail about this [16].
Antioch is a beautiful and fertile city which can even rival Damascus. There are water facilities in the city, supplied from outside and distributed to the streets, markets, roads and villas of the city…
In the gardens and fields at the entrances of the ramparts there are various kinds of grains, pulses and other fertile products. There is also a river in Antioch, called the Orontes (Asi).
Yaqut al-Hamawi 27 presented more details on this topic as well [15].
Antioch is a place of abundance, with its chaste and beautiful ambiance, pleasant climate, delicious and vitalizing fresh water, and various kinds of fruit. In his letter to Abu Hasan Hilal bin Muhsin al Sabi about Antioch, Ibn Batlan stated: "We started our journey from Aleppo to Antioch; the distance between them is a day and a night. As we traveled between these two cities, we never saw a ruined or wrecked building. The interesting part is that the people of this region grow wheat and barley under the olive trees. It is very easy to communicate with the villagers here, and foreigners coming from outside can stay here very safely."
Another traveler who spoke of the beauty of the water and hammams in Antioch is Al-Qazwini 28, who recorded his impressions as follows [14].
"Its hammams are very beautiful, and they have the most delicious and the best water in the world; they burn wood obtained from myrtle trees." Like the other travelers, Al-Baghdadi 29 also mentioned the abundance of water, the variety of fruit and the fertility of Antioch [18].
The abundance of water in Antioch made its hammams very famous, and travelers could use them for free. Al-Masʿudi provided information about the hammam buildings [13]: "This hammam in Antioch is on the right side of the mosque, and is built of bricks and rocks. It is a remarkable building. Its most notable feature is that once a year, when the moon rises, it appears in one of the gates of this building."
Yaqut al-Hamawi 30 also mentioned the beauty of the hammams [15]: "The hammams in this city have such a fresh and pleasant character that nothing like them can be found anywhere else in the world. Wood from the myrtle tree is burnt in these hammams."
F. The Trade in Antioch
Besides the architecture, the Arab travelers also described the markets and products of Antioch. Owing to its location, Antioch stood at the intersection of roads, on a very fertile plain, and had many inns, some of which have survived to this day. The city was evidently especially developed in weaving. Abd al-Mu'min al-Baghdadi was content to note that trade was very widespread in Antioch [18]. Al-Masʿudi mentioned that many people made their living by weaving the cloth called qisyan; Al-Idrisi 35 also mentioned the weaving: "There is one-piece clothing woven in Antioch which is very strong." [16] Al-Himyari 36 [19] explained what exactly this weaving was: "They produce very durable horse cloths in Antioch." Another traveler, Al-Qazwini 37, who visited this qisyan market, described it in more detail [14]: "There is a qisyan market in the center of the city. Many people work and make their living in this market. There are also 10 clerks who record the products being sold."
Yaqut al-Hamawi 38 also mentioned the weaving of qisyan and the place where it was produced, adding the dimensions of the building [15].
"Qisyan (clothes made of flax or silk) is sold in the center of the city. The building where they sell these clothes is also the place where the son of the King was resuscitated. The length of this building is 100 steps and its width 80 steps. There is a clock on one of the walls of this building which works day and night and divides the day into 12 hours. This clock is one of the most interesting clocks in the world."
G. The Periapt of Antioch
In ancient times, various periapts were used to protect cities from diseases, troubles, attacks and harmful animals. As one of the important cities of that era, Antioch had a periapt prepared to protect it from snakes, centipedes and ants. Al-Qazwini 39 [14] mentioned a periapt, quoting from Al-Masʿudi's [13] travel book, which was placed in the ramparts of Antioch. Due to people's curiosity, however, this periapt lost its power: "When someone extends his hand outside the ramparts, his hand is covered with insects, but when he draws his hand back inside the ramparts, the insects are gone. This was true until they broke apart a marble pillar and found a copper box inside, with some copper insects in it. After they broke apart this pillar, the city's freedom from insects was gone. Today in Antioch there are so many insects and mice that the cats cannot cope with them."
In the works that are the subject of this study, the travelers who visited Antioch usually had a positive impression; almost all of them emphasized that Antioch was an important city. Antioch had gradually begun to lose its magnificence and importance during the Roman Empire, but the traces of that magnificence had not completely disappeared.
The ramparts were the most remarkable structure for the travelers. Almost all of them mentioned the ramparts surrounding the city and described their features. The abundance of water and its natural flow through the streets of the city was another striking aspect. The architecture of the churches and hammams and the freshness of their water are also mentioned as impressive features, as is the burning of myrtle wood in the hammams.
Another important feature of Antioch understood from the travel books is that its people were tolerant and open to communication, and that visitors felt comfortable and safe in the city.
In the era in which the above-mentioned travelers visited Antioch, the city was still preserving its importance for Christians as one of the five patriarchates of the Orthodox world. As the travel books make clear, Antioch was a city of weaving and trade. Although the Arab travelers did not provide much information about it, a few of them wrote that tough clothes were woven there. | 2019-08-20T06:33:07.962Z | 0001-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "be1e124e116ac2a41be1875363945ac7b7b7f6a1",
"oa_license": null,
"oa_url": "http://www.ijch.net/vol1/008-HS00017.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "46bf60975b9d294f0ddcb0be2dd8dde91ee0c8fe",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": []
} |
264993309 | pes2o/s2orc | v3-fos-license | Bladder paraganglioma: Case report and review of the literature
Pheochromocytoma is a tumor that originates from the chromaffin cells of the adrenal medulla and is responsible for the production of catecholamines. When it occurs outside the adrenal glands, it is called a paraganglioma, accounting for 10%-15% of cases. In this report, we present the case of a 27-year-old male patient with a history of hypertension, who presented with hematuria and dizziness on urination and was diagnosed with bladder paraganglioma. Contrast-enhanced computed tomography revealed the presence of a bladder tumor. Bladder paraganglioma is a rare condition, and understanding its possible imaging findings is crucial for raising suspicion of this diagnosis and expanding our knowledge of this rare disease.
Background
Pheochromocytoma is a tumor that arises from the chromaffin cells of the adrenal medulla and produces catecholamines. Paragangliomas, which are extra-adrenal tumors, account for 10%-15% of cases [1]. Among these, bladder paragangliomas are uncommon, representing approximately 6%-10% of all cases. They do not show a gender predilection, and their clinical manifestations are diverse, ranging from hypertension, headaches, and palpitations to hematuria. Therefore, it is essential to accurately determine their location to establish appropriate management and care, based on a multidisciplinary approach between radiologists, urologists, endocrinologists, and nuclear medicine physicians [2]. In this case, we present a patient who experienced hematuria and vertigo during urination. Further investigations, including urine metanephrine testing, were conducted, and contrast-enhanced computed tomography (CT) revealed a lobulated bladder mass with a necrotic center.
✩ Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Case description
A 27-year-old male patient with a history of hypertension under pharmacological treatment presented at a high-complexity hospital with complaints of gross hematuria and a sensation of dizziness during urination that had been occurring for the past month. Upon initial review of systems, the patient denied experiencing any additional symptoms, constitutional symptoms, or weight loss. The initial physical examination revealed normal vital signs, cardiopulmonary findings, and abdominal parameters, with no tenderness or palpable masses. The neurological examination was normal.
Extended paraclinical tests showed elevated levels of urinary metanephrines. Abdominal CT with contrast revealed a lobulated bladder mass located on the upper wall, with endoluminal extension, avid enhancement with contrast medium, and a necrotic center (Fig. 1). Additionally, the mass exhibited peripheral calcification (Fig. 2) and dilation of vascular structures of the bladder wall and surrounding the tumor (Fig. 3); the mass measured on average 64 × 71 × 56 mm. In addition, an enlarged right obturator node was observed (Fig. 4). No lesions were found on the chest CT.
The patient underwent laparoscopic partial cystectomy, as well as robotic-assisted bilateral obturator and hypogastric pelvic lymphadenectomy. During the initial cystoscopy, a large bladder mass located on the dome of the bladder was identified. The mass appeared erythematous and hypervascularized, with a tendency to bleed upon contact, but it did not involve the urinary meatus. The mass was completely resected without complications. Macroscopically, the tumor displayed a shiny nodular surface with a yellowish-brown color. Microscopic examination revealed polygonal cells with eosinophilic and granular cytoplasm, round nuclei with scattered chromatin, few mitotic figures, and areas of hemorrhage. Immunohistochemical analysis demonstrated overexpression of synaptophysin and chromogranin, displaying a membranous and cytoplasmic pattern. CD56 exhibited membranous reactivity, and S100 showed reactivity in the sustentacular component. The Ki-67 cell proliferation index was 1%, with no central or confluent necrosis and no atypical mitoses, thus confirming the diagnosis of bladder paraganglioma with a nested growth pattern, without evidence of extension into adipose tissue or lymphovascular invasion, and a PASS score of 2.
Discussion
Bladder paraganglioma is a rare nonepithelial neuroendocrine neoplasm that arises from autonomic nervous tissue, specifically chromaffin cells [3]. It represents a small percentage (0.05%-0.06%) of all bladder tumors and typically occurs between the ages of 43 and 50, with no gender predominance [2]. Pheochromocytomas in the genitourinary tract most commonly occur in the urinary bladder (79.2%), followed by the urethra (12.7%), pelvis (4.9%), and ureter (3.2%) [4]. Bladder paragangliomas can present with a wide range of symptoms, and up to 83% of cases may exhibit functional symptoms related to catecholamine secretion [1,5]. Typical clinical manifestations include flushing, paroxysmal hypertension, palpitations, tremors, micturition syncope, and hematuria, while other diverse symptoms such as paresthesias and dyspnea have also been reported [4,5].
Due to delayed medical consultation, patients with bladder paragangliomas may present with advanced manifestations resulting from catecholamine secretion, such as syncope, retinopathy, or intracranial hemorrhage [4,6], leading to a delay in diagnosis and treatment.
Bladder paragangliomas can be located within the muscle layer or the bladder mucosa, with 45% found to be submucosal and 42% intramural. They typically have an average size of 2.5 cm; larger intramural tumors appear spherical or lobulated, while smaller ones exhibit a more homogeneous appearance [7].
The diagnostic imaging modalities for bladder paraganglioma include ultrasound, where the tumors appear as hypoechoic lesions (60%) forming an obtuse angle with the bladder wall and showing increased blood flow on color Doppler [1,5]. CT has a sensitivity of 91% and typically shows a hyperdense, rounded, homogeneous lesion with arterial-phase enhancement and, in larger tumors, prominent peritumoral vessels; calcification is present in 10% of cases and necrosis is rare [2]. These tumors are highly vascularized and may occasionally display calcifications [7]. Magnetic resonance imaging (MRI) is more sensitive than CT and provides excellent soft-tissue contrast for localizing the tumor within the bladder layers. Bladder paragangliomas typically appear hyperintense on T1- and T2-weighted images compared to the muscularis propria. Diffusion restriction is often observed, and larger tumors may exhibit a "salt and pepper" appearance [1,2,8].
Other imaging methods include nuclear medicine studies with tracer uptake and PET/CT. Gallium-68 DOTATATE is a very useful marker for identifying metastatic disease; compared with 18F-FDG and 18F-DOPA, it has greater sensitivity and specificity for abdominopelvic paragangliomas [2]. An additional available alternative, 123-iodine-metaiodobenzylguanidine (123I-MIBG) SPECT/CT, should be reserved for patients considered for radioiodine therapy [2].
Around 10%-26% of bladder paragangliomas have a malignant presentation, defined by lymph node involvement or distant metastasis [5,9]. Differentiating malignant from benign pheochromocytoma based on histological characteristics has been a challenge, which is why the PASS score (Pheochromocytoma of the Adrenal gland Scaled Score) was developed; a score > 4 is considered to indicate a potentially malignant tumor [10]. In terms of staging, bladder paragangliomas are classified as T2 when they involve the bladder wall, T3 when they extend into the perivesical fat, and T4 when they invade adjacent organs or muscles. N1 refers to the presence of pelvic lymph node involvement; there is no T1 stage for bladder paragangliomas. Metastasis is considered when nonadrenal and nonparasympathetic chain tissues are affected, with common sites being lymph nodes, bone, liver, and lungs [2].
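The staging and risk rules above can be summarized in a small sketch. The helper below is purely illustrative (the function name and string encodings are our own, not a clinical tool): T2 = bladder-wall involvement, T3 = perivesical-fat extension, T4 = invasion of adjacent organs or muscle (there is no T1 stage), N1 = pelvic nodal involvement (N0 is assumed as its absence), and a PASS score > 4 flags a potentially malignant histology.

```python
# Hypothetical helper summarizing the staging rules quoted above.
# All names and encodings are illustrative assumptions, not a clinical tool.
def stage_bladder_paraganglioma(extent, pelvic_nodes, pass_score):
    t_stage = {"bladder_wall": "T2",        # involves the bladder wall
               "perivesical_fat": "T3",     # extends into perivesical fat
               "adjacent_organs": "T4"}[extent]  # invades adjacent organs/muscle
    n_stage = "N1" if pelvic_nodes else "N0"
    return t_stage, n_stage, pass_score > 4  # PASS > 4: potentially malignant

print(stage_bladder_paraganglioma("bladder_wall", False, 2))    # -> ('T2', 'N0', False)
print(stage_bladder_paraganglioma("adjacent_organs", True, 6))  # -> ('T4', 'N1', True)
```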
Histologically, these tumors are composed of principal cells arranged in cords or nests, surrounded by sustentacular cells and capillary networks. Immunohistochemical staining for neuroendocrine markers such as synaptophysin and CD56 is usually positive, along with positive staining for chromogranin A. Other conditions involving catecholamine-synthesizing enzymes can be ruled out [2].
Treatment options for paragangliomas include catecholamine blockade, surgery, chemotherapy, and radiation therapy, depending on the stage of the tumor. Surgical management, such as transurethral resection of bladder tumor (TURBT) or partial cystectomy [2,11], is typically chosen for localized or locally advanced tumors. Prior medication administration is necessary to prevent hypertensive crises, arrhythmias, or other complications during surgery. Nonsurgical approaches may involve chemotherapy. Follow-up is recommended 3 months after finishing medical treatment for patients with elevated biochemical markers, nonfunctioning tumors, or those without a hormonal profile prior to surgery. Local recurrence, even after margin-negative resection, occurs in approximately 15% of cases, highlighting the importance of regular imaging follow-up, ideally with MRI, every 1-2 years [2].
In conclusion, although bladder paragangliomas are rare, it is crucial to recognize and suspect this condition. Accurate diagnosis, use of the characteristic imaging techniques, and appropriate planning of interventions are essential to avoid unnecessary urgent surgeries or misdiagnoses incompatible with the nature of the tumor.
Ethical approval
This article was approved by the hospital ethics committee.
Patient consent
Informed consent was obtained from the patient for the publication of data from his clinical history and the necessary images.
Fig. 1. - (A) Computed tomography (CT). Coronal section in a soft-tissue window showing a mass with soft-tissue density (white arrow) dependent on the wall of the bladder dome, with endophytic growth and lobulated contours. (B) Contrast-enhanced computed tomography (CT). Coronal section in a soft-tissue window showing avid enhancement of the mass (black arrow) and a necrotic center (yellow arrow). | 2023-11-04T15:11:33.372Z | 2023-11-02T00:00:00.000 | {
"year": 2023,
"sha1": "057e4e76da90ab67ad9e67ddccc2b3f5326b5bed",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.radcr.2023.10.021",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "becf9dd9e8d678bd4fd4bdcaf4a282f372e15e53",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
125946109 | pes2o/s2orc | v3-fos-license | Quantum Brownian Motion of a Magnetic Skyrmion
Within a microscopic theory, we study the quantum Brownian motion of a skyrmion in a magnetic insulator coupled to a bath of magnon-like quantum excitations. The intrinsic skyrmion-bath coupling gives rise to damping terms for the skyrmion center-of-mass, which remain finite down to zero temperature due to the quantum nature of the magnon bath. We show that the quantum version of the fluctuation-dissipation theorem acquires a non-trivial temperature dependence. As a consequence, the skyrmion mean square displacement is finite at zero temperature and has a fast thermal activation that scales quadratically with temperature, contrary to the linear increase predicted by the classical phenomenological theory. The effects of an external oscillating drive which couples directly on the magnon bath are investigated. We generalize the standard quantum theory of dissipation and we show explicitly that additional time-dependent dissipation terms are generated by the external drive. From these we emphasize a friction and a topological charge renormalization term, which are absent in the static limit. The skyrmion response function inherits the time periodicity of the driving field and it is thus enhanced and lowered over a driving cycle. Finally, we provide a generalized version of the nonequilibrium fluctuation-dissipation theorem valid for weakly driven baths.
I. INTRODUCTION
The impact of the bath fluctuations on the dynamics of open nonequilibrium systems is commonly treated by nonlinear stochastic differential equations for the macrovariables, known as generalized Langevin equations [1]. Within this description, the thermal bath exerts random fluctuating forces on the central system which eventually undergoes a Brownian propagation [2][3][4]. The system-bath coupling gives rise to non-Markovian memory damping terms and random forces with a colored correlation [5]. In principle, both the noise and the damping terms are determined by the system-bath interaction, a relation which is manifested in the well known fluctuation-dissipation theorem [6].
Quantum stochastic dynamics are present in a variety of physical systems, ranging from quantum optics [7], transport processes in Josephson junctions [8], coherence effects and macroscopic quantum tunnelling in condensed matter physics [9] and many more, which form a large body of current active research. Here we focus on the stochastic dynamics of particle-like magnetic skyrmions, which similar to particle-like solitonic textures in quantum superfluids [10,11], experience dissipative and stochastic forces from their environmental surroundings.
Classically, the dynamics of a magnetic skyrmion is governed by the Landau-Lifshitz-Gilbert (LLG) equation [29,30], which incorporates dissipation mechanisms by a phenomenological local in time Ohmic friction term, known as Gilbert damping. At finite temperatures, the skyrmion is subjected to thermal fluctuations that will render its propagation stochastic, similarly to the Brownian motion of a particle. The conventional assumption for the fluctuating field acting on magnetic particles [31] as well as skyrmions [32][33][34][35][36][37][38], is that it is a Gaussian stochastic process with a white noise correlation function proportional to the phenomenological Gilbert damping.
In a magnetic insulator and at low enough temperature, the skyrmion dynamics is dominated by the unavoidable coupling of its center-of-mass with the magnetic excitations generated by the skyrmion motion itself. Magnetic excitations are defined as fluctuations around the classical skyrmion solution through a consistent separation between collective (center-of-mass) and intrinsic (magnetic excitations) degrees of freedom. A description of the dynamics of one-dimensional (1D) domain walls [39] and 2D magnetic skyrmions [40] in a magnetic insulator beyond the classical framework demonstrated that the dissipation arising from the magnetic excitations is generally non-Markovian, with a damping kernel that is nonlocal in time. The quantum nature of the magnetic bath, naturally incorporated within this approach, becomes evident in the nontrivial temperature T dependence of the damping kernel, which remains finite even for vanishingly small T. A theory of dissipation which ignores quantum effects, based on the classical phenomenological LLG equation, is expected to be inadequate for atomic-size skyrmions observed in state-of-the-art experiments carried out at low temperatures of a few K [21,41,42].
In this paper we develop a microscopic description of the skyrmion stochastic dynamics at finite temperature using the functional Keldysh formalism for dissipative quantum systems [43][44][45], as well as the Faddeev-Popov collective coordinate approach [39,46] to promote the skyrmion center-of-mass to a dynamic quantity. We then arrive at a Langevin equation of motion, which includes a non-Markovian damping kernel and a stochastic field with a colored autocorrelation function, as a result of the skyrmion-magnon bath coupling. We demonstrate that the quantum version of the fluctuation-dissipation theorem acquires a non-trivial temperature dependence. As an important consequence, the skyrmion mean square displacement is finite at T = 0, and has a fast thermal activation proportional to T² at finite temperatures, in contrast to the linear increase in T obtained within the usual phenomenological theory [37].
We also investigate the effects of an external oscillating drive which unavoidably couples with the magnon bath in an analogous fashion to many physical situations where the driving of the bath results in important contributions to the dynamical response of the entire nanoscale system [47][48][49][50]. We demonstrate explicitly that additional time-periodic dissipative terms are generated by the driving field, in particular a friction and a topological charge renormalization term, which are both absent in the static limit. As a consequence, the skyrmion response function inherits the time periodicity of the drive, and it is thus enhanced and lowered over a driving cycle. Since the magnetic excitations are driven out of equilibrium, a generalization of the fluctuation-dissipation theorem should not be expected in general. Quite remarkably, however, in the weak driving regime, we find a nonequilibrium fluctuation-dissipation relation, which reduces to the equilibrium one in the static limit.
For the efficient manipulation of skyrmions at the nanoscale it is important to understand how random processes contribute to the skyrmion propagation, especially in the presence of time-periodic microwave fields which appear to be among the most efficient ways to induce translational motion of skyrmions in magnetic insulators [51][52][53]. The microscopic understanding of the stochastic skyrmion motion becomes also important in view of proposed devices for stochastic computing based on skyrmions [54,55].
The structure of the paper is as follows. In Sec. II we present a detailed derivation of the Langevin equation for the skyrmion collective coordinate using the functional Keldysh formalism in the presence of a timedependent magnetic field. In Sec. III we evaluate and discuss the damping kernel, while in Sec. IV we investigate the skyrmion response function. The quantum fluctuation-dissipation theorem and its generalized nonequilibrium version in the presence of the oscillating field are presented in Sec. V, together with a discussion on the skyrmion mean square displacement. Our main conclusions are summarized in Sec. VI, while some technical details are deferred to four Appendices.
II. LANGEVIN EQUATION
The purpose of this section is to present a derivation of the quantum Langevin equation for the skyrmion center-of-mass coordinate, by making use of a functional integral approach for the magnetic degrees of freedom at finite but low temperatures, combined with the Keldysh technique to include the effects of a time-dependent oscillating magnetic field. To begin with, we note that the essential features of the dynamics of a normalized magnetization field in spherical parametrization m = [sin Θ cos Φ, sin Θ sin Φ, cos Θ], defined in the 2D space, are described by a partition function of the form Z = ∫ DΦ DΠ e^{iS}. Here, the functional integration is over all configurations and the field Π = cos Θ is canonically conjugate to Φ. The Euclidean action S for a thin magnetic insulator in physical units of space r̃ and time t̃ is given by Eq. (2), where Φ̇ = ∂_t̃ Φ denotes the real-time derivative of the field Φ. The first term in Eq. (2) describes the dynamics and is known as the Wess-Zumino or Berry phase term [39], while the translationally symmetric energy term supports skyrmion configurations with nontrivial topological number Q₀ as metastable solutions, due to the presence of the Dzyaloshinskii-Moriya (DM) interaction [56,57] of strength D. Here, r̃ = (x̃, ỹ), S is the magnitude of the spin, N_A is the number of magnetic layers along the perpendicular z̃ axis, and α is the lattice spacing. The strength of the exchange interaction J, the easy-axis anisotropy K, and finally D are measured in units of energy, while the strength of the magnetic field H is given in units of Tesla (T). It is convenient to introduce dimensionless variables as r = (D/Jα) r̃, t = (D²/J) t̃, and T = k_B T̃ J/D², where T̃ is the temperature measured in Kelvin (K). Also, k_B is the Boltzmann constant and throughout this work we use ħ = 1. The energy functional in reduced units is given by the correspondingly rescaled expression. The classical skyrmion field, denoted as Φ₀(r) and Π₀(r), is found by minimizing the energy functional F(m) [58,59].
We then arrive at the following rotationally symmetric solution in polar coordinates r = (ρ cos φ, ρ sin φ): Φ₀(r) = φ + π/2, while the skyrmion profile depends only on the radial coordinate, Θ₀(r) = Θ(ρ). In Fig. 1 we depict the magnetization profile of the skyrmion Θ₀(ρ) for various values of the magnetic field h, using the trial function. The parameter λ, which denotes the skyrmion size, and ∆₀ are calculated by fitting the approximate function to the one obtained numerically. This profile has a topological number Q₀ = −1. We next address the stochastic dynamics of the skyrmion described by the classical fields Φ₀ and Θ₀, in contact with the bath of magnetic excitations at finite temperature and driven by an external magnetic field that oscillates in time. This is achieved by first promoting the skyrmion center-of-mass to a dynamical variable R(t), then treating the magnetic excitations as quantum fluctuations around the classical field, and finally obtaining an effective functional [1,39,40,60] by integrating out the magnon degrees of freedom. At the same time, the real-time dynamics of the external field, as well as the stochastic effects of the magnon bath at finite T, are captured by replacing the time integration by an integration over the Keldysh contour, which consists of two branches. The upper branch extends from t = −∞ to t = +∞, while the lower branch extends backwards from t = +∞ to t = −∞ [45]. It is worth mentioning that the formalism derived below is applicable to any general energy functional F as long as it satisfies the specified requirements.
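The statement Q₀ = −1 for this profile can be checked numerically from the defining integral Q = (1/4π) ∫ m · (∂ₓm × ∂ᵧm) dx dy. Since the paper's fitted trial function is not reproduced above, the sketch below uses the common ansatz Θ(ρ) = 2 arctan(λ/ρ), which shares the same boundary conditions (Θ = π at the center, Θ → 0 at infinity); the grid size and λ are arbitrary illustrative choices:

```python
import numpy as np

# Numerical evaluation of the skyrmion topological charge
#   Q = (1/4*pi) * Int m . (dm/dx x dm/dy) dx dy
# for Phi_0 = phi + pi/2 and the illustrative profile Theta = 2*arctan(lam/rho).

def topological_charge(lam=5.0, L=60.0, n=600):
    x = np.linspace(-L, L, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    rho = np.hypot(X, Y) + 1e-12          # avoid division by zero at the center
    theta = 2.0 * np.arctan(lam / rho)    # pi at the center, -> 0 far away
    phi = np.arctan2(Y, X) + np.pi / 2.0  # Phi_0 = phi + pi/2 (helicity pi/2)
    m = np.stack([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    dmx = np.gradient(m, x, axis=1)       # d m / dx
    dmy = np.gradient(m, x, axis=2)       # d m / dy
    density = np.einsum("iab,iab->ab", m, np.cross(dmx, dmy, axis=0))
    dA = (x[1] - x[0]) ** 2
    return density.sum() * dA / (4.0 * np.pi)
```

On this finite box the integral evaluates to approximately −1, with small deviations from boundary truncation and discretization; the helicity offset π/2 does not affect Q.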
We define two components of the fields as Φ₊ ≡ Φ(t + i0) and Φ₋ ≡ Φ(t − i0), which reside on the upper and the lower parts of the time contour, respectively. Similarly, we define fields Π± = Π(t ± i0). Moreover, quantization of the path-integral variables implies a decomposition in which η and ϕ are the quantum fluctuations, and the coordinate R(t) is energy independent owing to the assumed translational invariance of the system. We therefore expect the existence of a pair of zero modes Y_i, with i = x, y, which need to be excluded from the functional integral to avoid overcounting degrees of freedom, by imposing proper gauge-fixing conditions. We use a convenient spinor notation, and we also define linear transformations of the fields by performing a Keldysh rotation. Here, χ_c (R_c) and χ_q (R_q) denote the classical and quantum fluctuations (coordinate), respectively. Moreover, we introduce the field ζ = (χ_c, χ_q)ᵀ in order to obtain the action in a more compact form. Implementing all the above transformations in the action of Eq. (2), and taking into account that time integration is now performed over the upper and lower time branches denoted by the symbol s = ±1, the partition function becomes Z = ∫ DR_c DR_q e^{iS_cl} Z̃. Here, F_i^s = ∫ dr χ†_s σ_z Y_i is the gauge condition, and J_F^s(t, t') = dF^s(t)/dR(t') is the Jacobian matrix of the coordinate transformation, which is treated as an additional perturbation to the N_A term in the action. In the classical part of the effective action, b(t) denotes a time-dependent external field, d = (J/D)², and we have neglected an overall constant from the configuration energy of the classical skyrmion. The fluctuation-dependent part of the Keldysh action takes the form of Eq. (9), where the term V describes the coupling of the external field with the magnons and is treated as a time-dependent perturbation to the magnon Hamiltonian.
The magnetic fluctuations appear as solutions of the eigenvalue problem (EVP) HΨ_n = ε_n σ_z Ψ_n, solved in detail in Appendix D. Moreover, we define K_s = −iS σ_z Ṙ_i^s Γ_i, assuming that repeated indices i, j = x, y are summed over, and we also introduce an abbreviated notation. The circular multiplication sign in Eq. (9) implies a convolution of the form given in Eq. (10). Note that Eq. (9) assumes the absence of potentials that break translational symmetry, which would generate additional classical dissipation terms [40] with interesting consequences for the skyrmion dynamics in confined geometries [53]. A considerable simplification is also provided in the limit where the skyrmion configuration energy S₀ is much larger than the energy S_B = ∫ dr dt b(t) · m(r, t) added by the external applied field, S₀ ≫ S_B. In this case, m(Φ₀, Π₀) is a good approximation for the skyrmion configuration, while terms linear in the fluctuations are negligibly small and do not appear in Eq. (9).
To proceed we note that the functional Z̃ is an integral of Gaussian form if we neglect terms of O(1) in N_A originating from the Jacobian determinant det(J_F). Thus, after integration, Z̃ reduces to a determinant form, where the prime notation on the determinant and the trace excludes the zero modes. By performing an expansion retaining terms up to second order in Ṙ and first order in V, the effective action for the classical and quantum coordinate is obtained as in Eq. (12), where ∆G₀ = G₀ Ṽ G₀. The advantage of the Keldysh rotation is that the operator G₀ is identified with the Green function of the fluctuations, whose retarded and advanced components are given in real time with T± time-ordering in chronological/antichronological order. We parametrize the Keldysh Green function in terms of a distribution function which in thermal equilibrium is given by F(ω) = coth(βω/2), with β = 1/T. The representation in frequency space ω is obtained by the usual Fourier transformation, g(t) = (1/2π) ∫ dω e^{−iωt} g(ω). The standard way to calculate the quasiclassical equation of motion for the skyrmion coordinate R_c is to evaluate the saddle point of the action (12) by extremizing with respect to the quantum coordinate R_q [63]. We note that terms proportional to K_q K_c describe temperature-dependent dissipation due to magnon modes, while we show explicitly that terms proportional to K_q K_q give rise to random forces. To distinguish between the contributions from these terms we rewrite the effective action of Eq. (12) as S_eff = S_cl + S_dis + S_st. The function C_ij(t, t') is found by evaluating the trace appearing in Eq. (16) with the eigenstates Ψ_ν(r, t) of the operator G, and is given explicitly in Appendix A. To demonstrate that S_st indeed gives rise to random fluctuating forces, we introduce auxiliary fields ξ_i via a Hubbard-Stratonovich transformation, Eq. (17). Minimizing the r.h.s.
of Eq. (17) with respect to R_q^j results in a random force term ξ_j in the equation of motion, characterized by an ensemble average given in Eq. (18). By minimizing the effective action S_eff, we obtain the dynamical Langevin equation (19) for the classical coordinate, with Q̃₀ = −4π N_A Q S d, where ε_ij is the Levi-Civita tensor and the time of preparation of the initial state is at t → −∞.
The first term in Eq. (19) is a Magnus force acting on the skyrmion, proportional to the winding number [61,62], while the nonlocal (in time) damping kernel is given by Eq. (20), expressed through the Green functions with a, b = R, A, K labeling the retarded, advanced, and Keldysh components.
The damping kernel of Eq. (20) describes the dissipation which originates from the coupling of the skyrmion to the quantum bath of magnetic excitations and has an explicit temperature dependence through the Keldysh Green function G^K. Note that an external force acting on the skyrmion is absent, as a direct consequence of the spatial uniformity assumed for the external magnetic field. Translational motion of the skyrmion would be induced by a spatially dependent magnetic field, for example a magnetic field gradient [64,65], and its effect has been studied in Ref. 53. Here, the external time-periodic field acts on the quantum bath of magnons and is naturally incorporated in the stochastic Langevin equation, Eq. (19). This allows us to generalize the quantum theory of dissipation to account for the effects of the driven bath in several observables related to the skyrmion dynamics.
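For intuition about how the Magnus term shapes the stochastic trajectories, the following sketch integrates a drastically simplified, Markovian stand-in for Eq. (19): the non-Markovian kernel γ_ij is replaced by a constant effective mass M plus a local Ohmic friction η, and the noise ξ is taken white. All numerical values (M, Q0, eta, kT, dt) are invented for illustration and are not parameters of the paper:

```python
import numpy as np

# Markovian toy version of Eq. (19):
#   M dv/dt = Q0 * (eps_ij v_j) - eta * v + xi(t),   dR/dt = v,
# with white noise of strength set by the classical FD relation.
# All parameters below are illustrative, not taken from the paper.

def simulate(M=1.0, Q0=5.0, eta=0.1, kT=0.0, dt=1e-3, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    v = np.array([1.0, 0.0])
    R = np.zeros(2)
    traj = np.empty((steps, 2))
    for k in range(steps):
        magnus = Q0 * np.array([v[1], -v[0]])           # Q0 * eps_ij * v_j
        xi = np.sqrt(2.0 * eta * kT / dt) * rng.standard_normal(2)
        v = v + dt * (magnus - eta * v + xi) / M        # Euler step for velocity
        R = R + dt * v
        traj[k] = R
    return traj, v
```

With kT = 0 the velocity spirals down under the friction while the Magnus term curves the path; restoring the memory kernel and colored noise of Eqs. (18) and (20) would change the short-time dynamics qualitatively.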
III. DAMPING KERNEL
Our next task is to analyze the damping kernel of Eq. (20) in the case of a driven bath. In Appendix B we obtain the real-time damping kernel γ⁰_ij(t − t') in the absence of a drive, and thus establish agreement with earlier results derived in Matsubara space using the imaginary-time functional integral approach [40]. Note that, although the Laplace transform γ⁰_ij(z) is frequency dependent, we are usually interested in the long-time asymptotic behavior of the skyrmion dynamics, which is in turn determined by the low-frequency part of the kernel. This low-frequency regime is specified by the condition |ω| ≪ ε_gap, with ω = (iz) and ε_gap = 2κ + h the lowest magnon gap, while at the same time the temperature is limited to the quantum regime T ≪ ε_gap. Thus, under the assumptions specified above, the diagonal damping kernel acquires a super-Ohmic power-law behavior. Following the usual terminology [1], Ohmic friction is described by a damping term of the form zγ(z) ∝ z^s with s = 1, while for s > 1 we call it super-Ohmic. The T-dependent mass is given by Eq. (22), with F̃_{νν'} = F(ε_ν) − F(ε_ν') and F(ε_ν) = coth(βε_ν/2).
Here, the sum runs over the quantum number ν = {q = ±1, n}, where the index q distinguishes between particle states (q = 1), solutions of the eigenvalue problem HΨ_n = ε_n^q σ_z Ψ_n with positive eigenfrequency ε_n^1 = +ε_n, and the corresponding negative-frequency partner states (q = −1). Note that the expression of Eq. (22) is symmetric under the exchange of the indices ν and ν', and that there is no singularity for ε_ν = ε_ν', since lim_{ε_ν → ε_ν'} F̃_{ν'ν}/(ε_ν − ε_ν') = β/[2 sinh²(βε_ν/2)]. The quantum nature of the magnon bath is evident from the non-vanishing M(T) in the T → 0 limit. In order to emphasize that M(0) is finite and independent of the effective spin N_A S, contrary to the Magnus force proportional to Q̃₀ = −4πQ N_A S d, we refer to the mass of Eq. (22) as the quantum mass. This terminology allows us to distinguish M(T) from the semiclassical mass already calculated in Ref. 40 in the presence of spatial confinement, which scales linearly with N_A S. The off-diagonal damping kernel has a super-Ohmic low-frequency power law γ_xy(z) ∝ z², irrelevant for the skyrmion dynamics at times t ≫ ε_gap⁻¹. The T-dependence of the quantum mass M(T) is depicted in Fig. 2. With this preparation, we are now in a position to generalize the damping kernel in the presence of the external driving field turned on at time t = t₀, b(t) = b₀ Θ(t − t₀) cos(ω_ext t)(sin φ_ext, 0, cos φ_ext), tilted in the xz-plane by the angle φ_ext away from the z-axis. In the presence of b(t), the magnons are subjected to the potential V(r, t) = b₀ Θ(t − t₀) cos(ω_ext t) V(r), where V(r) is given in Eq. (D6). The damping kernel of Eq. (20) then acquires an additional correction due to the time-dependent field, γ_ji(t, t') = γ⁰_ji(t − t') + ∆γ_ji(t, t') [Eq. (23)]. The function g_ji^ext(t) carries information on the external drive, g_ji^ext(t) = Θ(t − t₀) b₀ cos(ω_ext t − |ε_ji|π/2), while W_ji(t), given in Eq. (24), carries information about the magnon modes, with w_{ν₁ν₂}(t) = Θ(t) F̃_{ν₁ν₂} sin[(ε_ν₁ − ε_ν₂)t]. We also introduce the corresponding matrix elements V_{ν₁ν₂}.
We note that the triple summation over the magnon quantum numbers originates from the fact that the external field induces a finite overlap, V_{ν₁ν₂} ≠ 0 for ν₁ ≠ ν₂. Note that Eq. (24) is valid only away from the resonance condition ω_ext = ε_ν₂ − ε_ν₁, under the assumption that the external potential V induces only a small overlap 0 < |V_{ν₁ν₂}| ≪ 1 between magnon modes carrying approximately the same energy. Thus, the energy differences are restricted as 0 ≤ |ε_ν₁ − ε_ν₂| ≤ ε_d, and it also holds that ε_d ≪ ω_ext. In Fourier space with frequency ω, the equation of motion given in Eq. (19) takes the corresponding Fourier-space form, with F(t, ω) satisfying ∫_{−∞}^{∞} dω e^{−iωt} F(t, ω) = 0, and where γ_ji(t, ω) = γ⁰_ji(ω) + ∆γ_ji(t, ω). It appears convenient to calculate ∆γ_ji(t, ω) in Laplace space z, with ω = (iz). The correction to the damping kernel, ∆γ_ji(t, z), describes the effects of the driven magnon bath on the skyrmion and is treated as a perturbation to γ⁰_ji(z). Here, W_ji(z) is the Laplace transform of W_ji(t) given in Eq. (24). In Eq. (26) we assume that the time t₀ coincides with the preparation time of the initial state, i.e. t₀ → −∞, and we therefore neglect boundary terms that depend on t₀. A Taylor expansion around the origin, γ_ji(t, z) ≈ γ_ji(t, 0) + z ∂_z γ_ji(t, z)|_{z=0} + O(z²), valid for frequencies ω ≪ ε_gap, provides the low-frequency power-law behavior of the damping kernel. For the diagonal part we find ∆γ_xx(t, z) ≈ D(T) sin(ω_ext t) + z δM(T) cos(ω_ext t) (28), and similarly the off-diagonal corrections are ∆γ_yx(t, z) ≈ δQ(T) cos(ω_ext t) + z G(T) sin(ω_ext t) (29). Explicit expressions for the T-dependent coefficients appearing in Eqs. (28) and (29) are given in Appendix C. As expected, in the static limit ω_ext → 0, all the terms in Eqs. (28) and (29), except the mass renormalization, vanish.
In the special case ε_d ≪ ω_ext ≪ ε_gap, where 0 ≤ |ε_ν₂ − ε_ν₁| ≤ ε_d is the energy difference induced by the external potential V, we find the simplified expressions D(T) = −ω_ext W̄_ii, δM(T) = W̄_ii, δQ(T) = ω_ext W̄_yx, and G(T) = W̄_yx. The coefficient W̄_ji is given in terms of F̃_{νν'}, defined after Eq. (22). Due to the symmetries of the matrix elements we note the relations ∆γ_xx(t, z) = ∆γ_yy(t, z) and ∆γ_xy(t, z) = −∆γ_yx(t, z); thus the term δQ(T) cos(ω_ext t) can be considered as a temperature- and time-dependent correction to the topological charge Q̃₀, induced by the external drive. Similarly, the quantum mass acquires the correction δM(T) cos(ω_ext t). The low-frequency linear dependence of the quantity z γ_ji(t, z) signals a super-Ohmic to Ohmic crossover behavior, with measurable consequences for the skyrmion trajectory [53]. More specifically, the ac driving of the magnon bath at resonance displaces the skyrmion from its equilibrium position and results in a unidirectional helical propagation.
IV. RESPONSE FUNCTION
In this section, we calculate the equilibrium skyrmion response function, which is then generalized to the nonequilibrium case of a driven bath of magnons. The linear response of the skyrmion to the fluctuating force ξ_i(t) is encoded in the equilibrium response function χ⁰_ij, whose elements in Laplace space imply that a free topological particle with Q̃₀ ≠ 0 exhibits a different dynamical behavior than one with Q̃₀ = 0. In particular, we note that the static susceptibility χ₀ is infinite for a freely moving and finite for a confined Brownian particle [1]. For example, χ₀ = 1/ω₀² for a damped harmonic oscillator of frequency ω₀ [1]. Therefore, we see that χ₀ is finite due to the non-trivial Q̃₀, and, as expected, χ₀ diverges for Q̃₀ = 0. Moreover, the low-frequency expansion for the off-diagonal response function is χ⁰_yx(z) ≈ 1/(Q̃₀ z) + O(z), and in this case Q̃₀ plays the role of a velocity-dependent friction.
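The benchmark quoted above, χ₀ = 1/ω₀² for a damped harmonic oscillator, follows from the textbook response function χ(ω) = 1/(ω₀² − ω² − iηω); a two-line numerical check (the values of ω₀ and η are illustrative):

```python
# Standard damped-harmonic-oscillator response function; the static
# susceptibility is its omega -> 0 limit, chi_0 = 1/omega_0**2.
def chi_oscillator(omega, omega0=2.0, eta=0.3):
    return 1.0 / (omega0**2 - omega**2 - 1j * eta * omega)

chi0 = chi_oscillator(0.0)  # static susceptibility, here 1/4
```

The analogous statement in the text is that for the skyrmion the role of the confining ω₀² is played by the topological term, so χ₀ stays finite only for Q̃₀ ≠ 0.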
The response of the skyrmion position R(t) when the external drive b(t) is turned on is encoded in the response function χ_ij(t, t'), defined through the relation in Eq. (33). In an analogous fashion to the decomposition of the damping kernel given in Eq. (23), we generalize the response function as χ_ij(t, t') = χ⁰_ij(t − t') + δχ_ij(t, t'). Starting from the equation of motion given in Eq. (19) and using Eq. (33), we solve for the function δχ_ij(t, ω), defined as δχ_ij(t, t') = (1/2π) ∫ dω e^{−iω(t−t')} δχ_ij(t, ω), retaining first-order terms in b₀. In Laplace space, by performing an expansion of the full response function χ_ij(t, z) = χ⁰_ij(z) + δχ_ij(t, z) around z = 0 and keeping leading-order terms in z, we find the diagonal part, with δχ = [−2M(T) δQ(T) + Q̃₀ δM(T)]/Q̃₀³, and similarly the off-diagonal part, where δχ = [−2M(T) D(T) − Q̃₀ G(T)]/Q̃₀³. We observe that a new friction term emerges for the diagonal response function and a new static susceptibility term for the off-diagonal one. The characteristic behavior of the response functions χ_ji(t, z) is illustrated in Figs. 3-4. To begin with, an anticipated result is depicted in the colored surfaces plotted in Figs. 3(a) and 4(a), namely that χ_ji(t, z) are periodic functions of time t, with period T_ext = 2π/ω_ext = 19.63 (1.3 ns). The z-dependence of χ_ji(t, z) carries information on the memory effects that originate from the skyrmion-magnon bath coupling, including the additional dissipative terms generated by the oscillating driving field. Thus we notice that the diagonal χ_ii(t, z) depends on the friction coefficient D(T), while the off-diagonal χ_yx(t, z) has a dependence on the topological charge renormalization δQ(T).
V. FLUCTUATION-DISSIPATION THEOREM
In this section we turn our attention to the derivation of the fluctuation-dissipation (FD) theorem for a skyrmion in contact with a bath of magnons at equilibrium. An extension of the FD relation is also derived for a nonequilibrium bath of magnons which is weakly driven by an oscillating magnetic field, a relation which reduces to the FD theorem in the static limit. We also calculate the time and temperature dependence of the skyrmion mean square displacement (MSD).
The FD theorem relates equilibrium thermal fluctuations and dissipative transport coefficients [1,4]. In the absence of an external drive, the Fourier transform C⁰_ij(ω) of the quantum stochastic force correlation function defined through Eq. (16) is related to the damping kernel γ⁰_ij(ω) by the relation in Eq. (36). This is the quantum mechanical version of the FD theorem, with the observation that quantum effects enter not only through the usual ω coth(βω/2) term, but additionally through the non-trivial ∝ coth(βε_ν/2) dependence of the damping kernel γ_ij(ω).
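The crossover built into the ω coth(βω/2) factor of Eq. (36) is easy to probe numerically: it tends to the classical white-noise value 2T for ω ≪ T, and to the zero-point value |ω| for T → 0 (units with ħ = k_B = 1):

```python
import math

# Quantum weight of the FD theorem, omega * coth(beta*omega/2), beta = 1/T.
def fd_weight(omega, T):
    return omega / math.tanh(omega / (2.0 * T))
```

The non-vanishing T → 0 limit of this factor is what makes the skyrmion MSD finite at zero temperature in the main text.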
We now turn to the extension of the FD relation of Eq. (36) in the presence of an external field b(t). In general, the stochastic fluctuations of reservoirs driven out of equilibrium do not necessarily relate to their dissipative properties, and a generalization of the FD theorem should not be expected, except for some special cases [66,67]. Following the same methodology as in Sec. III, we decompose the random force autocorrelation function as in Eq. (37), where the stochastic function ∆C_ji(t, t') satisfies Eq. (38). Here, U_ji(t − t') carries information about the magnon bath. We remind the reader that the damping kernel ∆γ_ji(t, t') is expressed through W_ji(t), given in Eq. (24). The generalization of the FD theorem is found to be independent of the form of the external drive and is expressed as a relation between the functions W_ji(t) and U_ji(t) in Fourier space, Eq. (40). The non-equilibrium FD relation Eq. (40) is valid within first-order perturbation theory with respect to the amplitude of the driving field; however, we expect it will serve as a basis for future investigations of the effects of time-dependent driving fields beyond first-order perturbation theory. In the special case of a static external field, ω_ext → 0, the FD theorem in equilibrium, Eq. (36), is recovered trivially. We now focus on the temperature dependence of the r.h.s. of Eq. (36), which we expect to give rise to a finite zero-temperature mean squared displacement (MSD) of the skyrmion position. This motivates us to consider the correlation function S_ij(t, t') = ½⟨[R_i(t) − R_j(t')]²⟩, where ⟨. . .⟩ denotes the ensemble average, and where ⟨R⟩ = 0. From Eqs. (32) and (18) it follows that in the special case of b(t) = 0, the diagonal MSD S_ii(t) = S_ii(t − t') reduces to Eq. (42), in terms of the symmetrized autocorrelation function. [Caption of Fig. 6: λ-dependence of the RMSD; the inset depicts low temperatures below 2 K. For a given temperature T̃, the RMSD has a local minimum at a critical radius λ_cr(T̃), which signals a crossover from short-time dynamical effects to long-time renormalization: for λ < λ_cr the RMSD decreases as 1/λ, while for λ > λ_cr it scales linearly with λ.] Eq. (42) contains several contributions, of which we retain only the leading terms in Q̃₀, under the assumption Q̃₀ ≫ 1, to further simplify the MSD to Eq. (43). First we focus on the temperature dependence of the root mean square displacement (RMSD) √S_ii(t), which is summarized in Fig. 5. As a result of the quantum magnetic excitations, the RMSD at T̃ = 0, defined as S_Q = √S_ii(T̃ = 0), remains finite. The dependence of S_Q on the skyrmion size λ, illustrated in the inset of Fig. 5, implies that quantum fluctuations become important for very small skyrmions of a few lattice sites, while their effect on the RMSD becomes negligible for larger skyrmions. We should emphasize that in this work we consider a classical skyrmion coupled to a bath of quantum magnetic excitations, and disregard quantum effects of the center-of-mass, which could increase the value of S_Q further and make it experimentally more accessible. Such quantum effects are beyond the scope of this paper, and we leave them as a motivation for further studies. Another important feature of Fig. 5 is the fast linear thermal activation for temperatures T̃ > 4 K, i.e., √S_ii(t) ≈ (0.14 α/K) T̃. Such a behavior results from the nontrivial temperature dependence of the fluctuation-dissipation theorem, Eq. (36), and stands in contrast to the √T dependence obtained in a classical description [37]. For a skyrmion with a radius 10α, the RMSD is (3.3/N_A S) percent of its radius at T̃ = 1.5 K, (10.8/N_A S) percent at T̃ = 5 K, and (32.5/N_A S) percent at T̃ = 15 K.
Further results are shown in Fig. 6, where we plot the dependence of the RMSD on the skyrmion size λ. We note that there is a critical radius λ_cr(T) which signals the interplay between long-time renormalization and short-time dynamical effects. For λ < λ_cr, the RMSD is inversely proportional to the skyrmion size, as expected for a massive particle with a mass proportional to the area λ². Indeed, the time-dependent damping kernel γ⁰_ij(t) of Eq. (B4) is renormalized to the effective mass of Eq. (22) in the long-time-scale approximation. On the contrary, for λ > λ_cr, shorter-time-scale dynamical information becomes dominant and the RMSD scales linearly with λ. Analogous results are obtained for very low temperatures below 2 K, as illustrated in the inset of Fig. 6.
Several conclusions can be drawn also from the time dependence of S_ii(t), as illustrated in Fig. 7. At short times t̃ ≪ 1, we find a quadratic dependence, S_ii(t) ≈ S₀ t̃², which resembles the ballistic regime of the Brownian motion of a particle [68]. The constant S₀ is found from Eq. (43) under the replacement sin[(ε_ν − ε_ν')t/2] → (ε_ν − ε_ν')t/2; for the specific parameters plotted in Fig. 7 we find S₀ = 4.2 × 10⁵. Such a ballistic motion is a direct consequence of the memory effects which dominate the dynamics at short time scales. At longer times, the memory effects become negligible and S_ii(t) saturates at a value which can be estimated by replacing sin²[(ε_ν − ε_ν')t/2] → 1/2 in Eq. (43).
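Both regimes, the short-time t² growth and the long-time saturation, follow from the sin²[(ε_ν − ε_ν')t/2] structure of Eq. (43); a toy sum with invented mode gaps and weights reproduces them:

```python
import math

# Toy version of the structure of Eq. (43): a weighted sum of
# sin^2[(eps_nu - eps_nu') * t / 2] terms.  The gaps and weights are
# made up for illustration; they are not the paper's magnon spectrum.
def toy_msd(t, gaps=(0.8, 1.3, 2.1), weights=(1.0, 0.5, 0.2)):
    return sum(w * math.sin(0.5 * g * t) ** 2 for g, w in zip(gaps, weights))
```

For t → 0 each term behaves as w(gt/2)², giving the ballistic t² law; once the arguments dephase, the sum oscillates around half of the total weight, i.e. the saturation plateau quoted in the text.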
The ballistic regime of the Brownian motion for a classical particle with a large inertial mass of the order of 10⁻¹⁴ kg has been experimentally observed for the short, μs time scales of the inertia-dominated regime [69,70]. Here, the ballistic motion we predict for the quantum dynamics of a magnetic skyrmion, with an inertial mass of 0.2 × 10⁻²⁸ kg at T = 580 mK, is restricted to the immeasurably small femtosecond regime, which, however, is comparable to the duration of ultrafast light-induced heat pulses needed to write and erase magnetic skyrmions [71]. We anticipate that the ballistic motion for a confined skyrmion, with an inertial mass of about 10⁻²⁶ kg [40], could possibly take place within the experimentally accessible nanosecond regime. It suffices to mention that the classical dissipation is dominated by the contribution of some low-lying localized modes with energy ε₀ in the GHz regime [72]. Thus, the quadratic short-time expansion is valid up to times ∼ ε₀⁻¹, i.e., the ballistic regime extends into the nanosecond regime. We also note that our predictions deviate significantly from the classical results for the mean squared displacement, which, in the latter case, increases linearly with time [37], a result that directly follows from the assumption of a phenomenological thermal white noise which scales proportionally to the Gilbert damping parameter.
VI. CONCLUSIONS
In this work, we consider the stochastic dynamics of a magnetic skyrmion in contact with a dissipative bath of magnons in the presence of a time-periodic external field, which directly couples to the magnon bath. We develop a microscopic derivation of the Langevin equation of motion based on a quantum field theory approach which combines the functional Keldysh and the collective coordinate formalism. The non-Markovian damping kernel is explicitly related to the colored autocorrelation function of the stochastic fluctuating fields, through the quantum mechanical version of the fluctuation-dissipation theorem. Emphasis is given to the nontrivial temperature dependence of the dynamical properties of the system, in terms of the fundamental response and correlation functions. Contrary to the prediction of the classical theory, the damping kernel and the mass remain finite at vanishingly small temperatures, due to the quantum nature of the bath considered in this work. This will give rise to a finite mean squared displacement at T → 0, which increases with temperature as T 2 , a result that deviates from the phenomenological prediction of a linear increase.
We rigorously treat the effects of an external drive on the bath, and therefore on the skyrmion-bath coupling, and we generalize the theory of quantum dissipative response. The bath is dynamically engineered out of equilibrium and, through its interaction with the skyrmion, gives rise to dissipation and random forces that incorporate the bath's dynamical activity. The magnitude of these effects is illustrated in the diagonal and off-diagonal response functions, which acquire an additional time-periodicity inherited from the external drive. In addition, a super-Ohmic to Ohmic crossover behavior is signalled by new friction and topological charge renormalization terms, similar to the effects predicted within a microscopic theory of classical dissipation with measurable consequences for the skyrmion path [53]. We note, however, that, in contrast to Ref. 53, where the external drive couples to a well-pronounced bath mode, here we do not consider resonance effects.
Within our path integral formulation, we are able to establish a generalization of the fluctuation-dissipation theorem to the nonequilibrium case for weakly driven magnetic excitations. The spectral characteristics of the bath modes of the damping kernel are related to those of the stochastic correlation function, irrespective of the form of the external drive. Notably, our results apply to similar mesoscopic systems embedded in a driven bath. Advances in the theoretical understanding of skyrmion dynamics out of equilibrium are expected to have an impact on similar particle-like objects such as solitonic textures in quantum superfluids and domain walls in ferromagnets. Our nonequilibrium formalism of skyrmion dynamics can serve as a basis for future experimental investigations as well as theoretical studies that go beyond first-order perturbation theory and beyond the slow dynamics of the GHz regime.
VII. ACKNOWLEDGMENTS
This work was supported by the Swiss National Science Foundation (Switzerland) and the NCCR QSIT.
"year": 2019,
"sha1": "4e51d24a86aec0f47a037ca445489e2e1054c05e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1904.09215",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "22510364345634e48e12c343edeffc7667a09914",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Proteomic analysis to identification of hypoxia related markers in spinal tuberculosis: a study based on weighted gene co-expression network analysis and machine learning
Objective This article aims to explore the role of hypoxia-related genes and immune cells in spinal tuberculosis and tuberculosis involving other organs. Methods In this study, label-free quantitative proteomics analysis was performed on the intervertebral discs (fibrous cartilaginous tissues) obtained from five spinal tuberculosis (TB) patients. Key proteins associated with hypoxia were identified using molecular complex detection (MCODE), weighted gene co-expression network analysis (WGCNA), least absolute shrinkage and selection operator (LASSO), and support vector machine recursive feature elimination (SVM-REF) methods, and their diagnostic and predictive values were assessed. Immune cell correlation analysis was then performed using the single sample gene set enrichment analysis (ssGSEA) method. In addition, a pharmaco-transcriptomic analysis was also performed to identify targets for treatment. Results Three genes, namely proteasome 20S subunit beta 9 (PSMB9), signal transducer and activator of transcription 1 (STAT1), and transporter 1 (TAP1), were identified in the present study. The expression of these genes was found to be particularly high in patients with spinal TB and other extrapulmonary TB, as well as in TB and multidrug-resistant TB (p-value < 0.05). They showed high diagnostic and predictive values and were closely related to the expression of multiple immune cells (p-value < 0.05). It was inferred that the expression of PSMB9, STAT1, and TAP1 could be regulated by different medicinal chemicals. Conclusion PSMB9, STAT1, and TAP1 might play a key role in the pathogenesis of TB, including spinal TB, and the protein products of these genes can serve as diagnostic markers and potential therapeutic targets for TB. Supplementary Information The online version contains supplementary material available at 10.1186/s12920-023-01566-z.
Introduction
According to World Health Organization (WHO) data, Mycobacterium tuberculosis infects about 1/4 of the global population, of which approximately 10 million develop active TB and 1.6 million die from it [1][2][3]. TB is one of the leading causes of death worldwide and poses a serious threat to global public health security [4]. A lung infection caused by M. tuberculosis leads to pulmonary TB. Extrapulmonary TB (EPTB) occurs when M. tuberculosis infects the spine, lymph nodes, kidney, liver, intestine, joints, brain, and other organs outside the lung [5]. The most common extrapulmonary form of TB is spinal TB, which accounts for half of all bone TB cases [6][7][8]. Spinal TB can cause severe bone destruction and scoliosis and impair neurological function. It has high rates of refractoriness, disability, and recurrence, which seriously affect the patient's quality of life [9,10]. Studies have revealed that patients infected with M. tuberculosis develop active TB when their immune system is imbalanced [11], and the incidence rate of EPTB is higher [12]. Granulomas containing large numbers of immune cells, including macrophages, monocytes, T cells, and B cells, form at sites of M. tuberculosis infection [13], suggesting that immune cell dysregulation might play a crucial role in TB pathogenesis.
Current research reveals that hypoxia plays a key role in pathological or physiological immune responses. In different immune processes and microenvironments, hypoxia affects inflammation and immunity differently. In pathological conditions, such as chronic inflammation, infection, and tissue ischemia, pathological hypoxia induces dysregulation of immune cells, leading to disease progression [14]. In one study, Allison N. Bucşan et al. found that Erdman, a strain of M. tuberculosis, exhibited greater virulence under hypoxic conditions. Hypoxia may substantially impact bacterial persistence, reactivation, and treatment efficiency [15]. A regulatory factor called hypoxia-inducible factor (HIF) plays an essential role in regulating the transcription of immune effector cells. As a result of tissue hypoxia, the HIF pathway is activated [16,17]. When the body is infected with bacteria, the bacterial oxygen consumption, formation of oxygen-impermeable biofilms, and inflammation-related hypoxia activate HIF and affect the function of immune cells [18][19][20]. In addition, a study shows that hypoxia can increase the drug resistance of Pseudomonas aeruginosa [21].
In this study, we utilized a label-free protein profiling method to analyze the diseased intervertebral disks of patients with spinal TB. We utilized WGCNA and machine learning methods to find key hypoxia-related genes. Furthermore, various diagnostic and predictive models were constructed to evaluate the diagnostic and predictive values of these key hypoxia-related genes in TB. We also used ssGSEA to identify immune cells associated with spinal tuberculosis and validated the results with data from routine blood tests. In addition, a pharmaco-transcriptomic analysis was also performed.
Tissue samples collection
We collected the intervertebral disks from ten patients who underwent spinal surgery at the First Affiliated Hospital of Guangxi Medical University from 2018 to 2020. Five patients with spinal TB were included in the experimental group, and five patients with thoracolumbar disk herniation were included in the control group. There was no evidence of autoimmune diseases, spinal tumors, or other infectious diseases in any of the patients. This study was conducted in accordance with the Declaration of Helsinki, passed the ethical review, and informed consent was obtained from all patients.
Label-free quantitative proteomic analysis
The specific steps and processes of the Label-Free Quantitative Proteomic Analysis are as described in our previous research [22], as follows:
Sample lysis
The RIPA solution must be prepared right before use and stored in an ice bath to keep it cool. The mixture consists of RIPA lysis buffer, a protease inhibitor cocktail, and 1 mM PMSF (phenylmethylsulfonyl fluoride). For each 100 mg of sample tissue, 1,000 µl of RIPA solution should be added, thoroughly mixed, and homogenized, with sonication at 4 °C for 5 min. Afterwards, centrifugation should be done at 14,000 g for 15 min at 4 °C. The supernatant should then be transferred to a new EP tube and stored in an ice bath.
BCA assay
The BCA (bicinchoninic acid) Protein Assay Kit instructions indicate that reagent A and reagent B should be mixed at a ratio of 50:1 and added at 160 µl/well to a 96-well plate (with five wells for a calibration curve and one well for a blank). Then 10 µl of each sample (diluted 5-10 times) or calibration standard protein (at five different concentrations) should be added to the respective wells. The plates should be shaken and incubated at 37 °C for 30 min, after which they should be read at a 562 nm wavelength. Using the calibration curve, the protein concentration of each sample can be determined.
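The quantification step above amounts to fitting a straight line through the five calibration standards and inverting it for each sample. A minimal sketch in Python, using illustrative numbers (not measured values from the study):

```python
# Sketch of the BCA quantification step: fit a linear calibration curve
# (blank-corrected A562 vs. known standard concentrations) and invert it
# to read off sample concentrations. All numbers here are illustrative.

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Five calibration standards (mg/ml) and their A562 readings (toy data).
standards = [0.125, 0.25, 0.5, 1.0, 2.0]
a562 = [0.10, 0.20, 0.40, 0.80, 1.60]

slope, intercept = linear_fit(standards, a562)

def concentration(absorbance, dilution_factor=5):
    """Back-calculate protein concentration, correcting for sample dilution."""
    return (absorbance - intercept) / slope * dilution_factor

print(round(concentration(0.40), 3))  # 0.5 mg/ml in the well, x5 dilution -> 2.5
```

In practice the plate reader software performs this fit; the sketch only makes the arithmetic explicit.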
Acetone precipitation
For every sample, 100 µg of protein was taken and diluted to 1 mg/ml in RIPA buffer. Then, 4-6 times the volume of pre-chilled acetone was mixed into the EP tube and shaken in an ice bath for 30 min or left to incubate at -20 °C for the entire night. Following centrifugation at a speed of 10,000 g and 4 °C, the supernatant was carefully discarded, taking care not to disturb the pellet. The sample was then washed twice using 200 µl of cold 80% acetone.
Resuspend protein for tryptic digest
Two hundred µl of 1% SDC and 100 mM ABC (ammonium bicarbonate) were added to the EP tube, mixed with a vortex, and spun down. The EP tube was then subjected to sonication for 5-30 min in a water bath to dissolve the proteins. TCEP (tris(2-carboxyethyl)phosphine) was then added to the EP tube to 5 mM and mixed at 55 °C for 10 min. After the sample was cooled down to room temperature (RT), IAA (iodoacetamide) was added to 10 mM. The EP tube was then incubated in the dark for 15 min. Trypsin (sequencing grade) was resuspended in a resuspension buffer to 0.5 µg/µl and incubated at RT for 5 min. A trypsin solution (protein:trypsin = 50:1) was then added to the EP tube. The mixture was well blended and spun down, then incubated at 37 °C with a thermomixer for approximately 8 h or overnight.
Cleaning up of SDC
After 2% TFA (Trifluoroacetic Acid, HPLC) was added to the EP tube, SDC was precipitated. After being centrifuged at the highest speed, the supernatant was transferred to a new EP tube. N * 100 µl of 2% TFA was added to the pellet to extract the co-precipitated peptides. This step was repeated twice. The three supernatants were then combined. After being centrifuged at the highest speed for 10-20 min, the supernatant was carefully transferred to a new EP tube, leaving the peptide samples.
Peptide desalting for Base-RP fractionation
Buffer A (0.1% FA, 2% ACN in H2O) and Buffer B (0.1% FA, 70% ACN) were prepared. The C18 (3 M) column was then equilibrated using 500 µl of ACN. This was followed by washing it out with 500 µl of 0.1% FA twice. The peptide solution was then added to the column. After low-speed centrifugation, liquid (A) was collected. This process was repeated once more, with the peptide eluted using 400 µl of 70% ACN and liquid (A) collected. Desalting was performed once again with liquid (A). The two liquids were then combined and dried with a vacuum at either 4 °C or room temperature. Buffer A was then added to re-dissolve the peptide to 1 µg/µl for LC-MS/MS detection or storage at −80 °C.
Separation via Nano-UPLC and LC-MS/MS
Separate 2 µg peptides from each sample and detect them using nano UPLC coupled with Q-Exactive mass spectrometry. Analyze using a reverse-phase column and a mobile phase composed of solvent A (0.1% FA, 2% ACN) and solvent B (80% ACN, 0.1% FA). Samples are directly loaded onto the chromatographic column by an autosampler and then separated by the column. Analyze peptides for 240 min/sample by LC-MS/MS, using positive ion detection mode with a scanning range of 350-1600 m/z and DDA acquisition method. Use standard parameters for resolution, AGC, maximum IT, NCE, isolation window, and dynamic exclusion time.
MaxQuant analysis and LFQ
MaxQuant (1.6.1.0) processed the raw MS data using the UNIPROT database. LFQ was performed with trypsin digestion; oxidation [M] and acetyl [protein N-term] were set as variable modifications, and carbamidomethyl [C] was set as the fixed modification (maximum of three variable modifications). Peptides without variable modifications were used for quantification, with an FDR of 0.01. The ten samples were standardized, and missing values were imputed using Perseus software. Protein groups with fewer non-missing values than biological replicates were removed. LFQ quantification results were log-transformed.
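The Perseus-style post-processing described above can be sketched as follows. This is a hedged stand-in, not the actual Perseus pipeline: Perseus normally imputes from a down-shifted normal distribution, while for brevity this sketch imputes the row minimum; the replicate threshold and toy intensities are illustrative.

```python
import math

# Sketch of the post-processing: log2-transform LFQ intensities, drop
# protein groups with fewer non-missing values than the number of
# biological replicates, and impute remaining missing values with the
# row minimum (a simplification of Perseus' down-shifted-normal imputation).

N_REPLICATES = 5  # five TB and five control samples in this study

def preprocess(lfq_rows):
    """lfq_rows: {protein: [intensity or None, ...]} -> filtered, log2, imputed."""
    out = {}
    for protein, values in lfq_rows.items():
        present = [v for v in values if v is not None]
        if len(present) < N_REPLICATES:      # too many missing values: drop group
            continue
        logged = [math.log2(v) if v is not None else None for v in values]
        floor = min(x for x in logged if x is not None)
        out[protein] = [floor if x is None else x for x in logged]
    return out

raw = {
    "PSMB9":  [1024.0, 2048.0, 512.0, 1024.0, 4096.0, None, 256.0, 512.0, 256.0, 512.0],
    "DROPME": [1024.0, None, None, None, None, None, None, None, None, 512.0],
}
clean = preprocess(raw)
print(sorted(clean))       # ['PSMB9']
print(clean["PSMB9"][:3])  # [10.0, 11.0, 9.0]
```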
Identification of differentially expressed proteins
To identify differentially expressed proteins between spinal TB and controls, we performed differential analysis of the normalized quantitative results using the "limma" package. |logFC| > 1 and p-value < 0.05 were set as the conditions for screening differentially expressed proteins [23,24]. To illustrate these differential proteins more clearly, we created a volcano plot and a cluster heat map using the "impulse" and "pheatmap" packages. All operations were carried out in R (version 4.1.1).
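The screening step reduces to a simple double threshold. A minimal stand-in (limma itself additionally moderates the t-statistics with an empirical-Bayes step; here the fold changes and p-values are assumed to be already computed, and the toy values are invented):

```python
# Minimal stand-in for the limma screening step: keep proteins with
# |log2 fold change| > 1 and p-value < 0.05.

def screen(results, lfc_cut=1.0, p_cut=0.05):
    """results: {protein: (log2fc, pvalue)} -> sorted list of differential proteins."""
    return sorted(p for p, (lfc, pv) in results.items()
                  if abs(lfc) > lfc_cut and pv < p_cut)

toy = {
    "STAT1": (2.3, 0.001),   # up-regulated, significant -> kept
    "TAP1":  (-1.4, 0.02),   # down-regulated in this toy example -> kept
    "ACTB":  (0.2, 0.0001),  # significant but small change -> dropped
    "ALB":   (1.8, 0.30),    # large change but not significant -> dropped
}
print(screen(toy))  # ['STAT1', 'TAP1']
```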
GO/KEGG and DO enrichment analyses
To further explore the biological functions of these differential proteins, we used the "clusterprofiler" package for gene ontology (GO) and Kyoto encyclopedia of genes and genomes (KEGG) enrichment analyses [25][26][27]. In addition, we also performed a disease ontology (DO) analysis on these differential proteins to reveal the relationship between spinal TB and other diseases [28]. To improve the accuracy of the results, we set the screening conditions as p-value < 0.05 and q-value < 0.05. Finally, the top 10 GO terms, KEGG pathway, and DO terms with the most significant enrichment were visualized.
Weighted gene co-expression network analysis
Weighted gene co-expression network analysis (WGCNA) is a systems biology method used to describe gene association patterns between different samples. Based on the interconnections within gene sets and the associations between gene sets and phenotypes, it can identify gene sets with highly synergistic changes as well as those most strongly correlated with the disease. It is widely used in research on diseases and other traits and in gene association studies [29]. In this study, we employed the "WGCNA" package to cluster all proteins, automatically select the best soft threshold, and finally obtain the protein modules related to the disease.
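The core transform behind WGCNA is the soft-thresholded adjacency a_ij = |cor(x_i, x_j)|^β, which suppresses weak correlations while keeping strong ones nearly intact. A toy illustration (the expression profiles and β = 6 are illustrative; the study lets the package pick β automatically, and full WGCNA additionally builds a topological overlap matrix and clusters it):

```python
# Toy illustration of the unsigned WGCNA adjacency a_ij = |cor|^beta.

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def adjacency(profiles, beta=6):
    genes = list(profiles)
    return {(g, h): abs(pearson(profiles[g], profiles[h])) ** beta
            for g in genes for h in genes if g != h}

profiles = {  # expression across 5 samples (illustrative numbers)
    "A": [1.0, 2.0, 3.0, 4.0, 5.0],
    "B": [2.1, 3.9, 6.2, 8.0, 9.9],   # tracks A closely
    "C": [5.0, 1.0, 4.0, 2.0, 3.0],   # essentially unrelated to A
}
adj = adjacency(profiles)
print(adj[("A", "B")] > 0.9)   # strong correlation survives the power: True
print(adj[("A", "C")] < 0.01)  # weak correlation is crushed: True
```

This is exactly why the soft threshold produces approximately scale-free networks: raising |r| to a power β > 1 drives near-zero correlations toward zero much faster than near-one correlations.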
Construction of a PPI network of hypoxia-related proteins
In this study, we investigated the role of hypoxia-related proteins in spinal TB by intersecting the two most disease-related modules in WGCNA with a set of all hypoxia-related genes in humans downloaded from the Molecular Signatures Database (version 7.5.1) and differential proteins [29]. Later, the results were used to construct a protein-protein interaction network through the STRING database (version 11.5) and visualized through Cytoscape (version 3.9.0). Finally, a key module in the network was retrieved through the MCODE plugin in Cytoscape software [30].
Identification of key hypoxia-related proteins and prediction model construction
In order to investigate the transcriptome expression levels of hypoxia-related proteins closely related to spinal TB, the GSE144127, GSE83456, and GSE147690 datasets related to TB were downloaded from the GEO database (https://www.ncbi.nlm.nih.gov/geo/). The mRNA expression levels of these hypoxia-related proteins in spinal TB and other extrapulmonary TB in the GSE144127 dataset were extracted for differential analysis. Finally, 11 hypoxia-related genes with consistent changes at the transcriptional and protein levels were obtained. We utilized two machine learning methods, LASSO and SVM-REF, to further screen these 11 hypoxia-related genes. LASSO is a regression analysis method that performs variable selection and regularization while fitting a generalized linear model, selecting the optimal variables at the smallest λ value [31]. This process is achieved through the "glmnet" package. SVM-REF is a powerful feature selection algorithm that continuously eliminates redundancy between features and finds the optimal feature subset by repeatedly building the model [32]. This process is implemented by the "e1071", "kernlab" and "caret" packages. Subsequently, we intersected the genes from the LASSO, SVM-REF, and MCODE modules to obtain three important genes. Finally, diagnostic models were developed using five machine learning techniques, including logistic regression [33], Bayesian logistic regression [34], decision tree [35], random forest [36], and extreme gradient boosting [37], to evaluate the diagnostic value of these three genes in TB disease.
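The final candidate set is simply the intersection of the three screens. The three shared genes below are the ones reported by the study; the other members of each set are placeholders, since the paper reports only the overlap:

```python
# Intersection of the three feature screens (LASSO, SVM-REF, MCODE).
# PSMB9/STAT1/TAP1 come from the study; GENE_W..GENE_Z are placeholders.

lasso   = {"PSMB9", "STAT1", "TAP1", "GENE_X", "GENE_Y"}
svm_ref = {"PSMB9", "STAT1", "TAP1", "GENE_Z"}
mcode   = {"PSMB9", "STAT1", "TAP1", "GENE_W"}

key_genes = lasso & svm_ref & mcode
print(sorted(key_genes))  # ['PSMB9', 'STAT1', 'TAP1']
```

Requiring agreement between two independent selection algorithms and a network-topology module is a common way to reduce false positives from any single method.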
Immune infiltration analysis
We obtained 28 immune cells and their marker genes from a prior study, used ssGSEA to assess the protein expression matrix through the "GSVA" package, and scored each sample according to the expression of the marker genes to determine the immune cell infiltration level [31]. Finally, using the "limma" and "corrplot" packages, the difference and correlation analyses were performed.
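The idea behind ssGSEA can be sketched with a greatly simplified rank-based score: rank all genes within one sample, then score an immune-cell signature by the mean rank of its marker genes. The real ssGSEA uses a weighted Kolmogorov-Smirnov running sum, and the marker lists below are illustrative, not the 28 signatures used in the study:

```python
# Simplified stand-in for per-sample signature scoring (ssGSEA-like).

def marker_score(sample_expr, markers):
    """Mean normalized rank (0..1) of the marker genes in one sample."""
    ranked = sorted(sample_expr, key=sample_expr.get)   # low -> high expression
    rank = {g: i / (len(ranked) - 1) for i, g in enumerate(ranked)}
    return sum(rank[g] for g in markers) / len(markers)

sample = {"CD3D": 9.1, "CD8A": 8.7, "MS4A1": 2.1, "ALB": 1.0, "ACTB": 5.0}
t_cell_markers = ["CD3D", "CD8A"]   # highly expressed here -> score near 1
b_cell_markers = ["MS4A1"]          # lowly expressed here -> score near 0

print(marker_score(sample, t_cell_markers) > 0.7)  # True
print(marker_score(sample, b_cell_markers) < 0.3)  # True
```

Because the score depends only on within-sample ranks, it is robust to per-sample scaling, which is one reason single-sample enrichment methods are popular for cross-cohort immune deconvolution.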
Blood routine data validation
To further validate the differential analysis of immune cell infiltration findings, we collected lymphocytes, monocytes, and platelets during routine blood examinations from 162 normal patients and 237 patients with spinal TB for statistical analysis. This study adhered to the Declaration of Helsinki guidelines and received approval from the hospital ethics committee.
Pharmaco-transcriptomic analysis
To provide new solutions for treating multidrug-resistant TB, we conducted a pharmaco-transcriptomic analysis utilizing the DrugBank database (version 5.1.9). The DrugBank database integrates the chemical structures and pharmacological actions of drugs, as well as the sequences, structures, and physiological pathways of drug targets [38]. It is an extensive, public web database. Finally, Cytoscape was used to obtain and visualize the up- or down-regulating effects of drug molecules on these genes.
Immunohistochemistry
In this study, 5 cases of intervertebral disc tissue resected during surgery for spinal tuberculosis diagnosed at the First Affiliated Clinical Hospital of Guangxi Medical University were taken as the test group, and 5 cases of intervertebral disc tissue resected during surgery for lumbar disc herniation were taken as the control group. The differences in expression of PSMB9, STAT1, and TAP1 between the experimental and control groups were compared by immunohistochemistry. After separating the disc tissue, we immersed it in formalin solution and preserved it within 10 min. We then made immunohistochemical sections and performed staining after laboratory operations such as wax embedding, sectioning, antigen retrieval, antibody hybridization, color development, and tissue sealing. The specimens were observed under an inverted microscope, and images of the experimental and control groups were collected, respectively. We used Image J software to evaluate the positive rate of all immunohistochemical images and used an independent samples t-test in IBM SPSS Statistics 26.0 to statistically analyze the positive rates of PSMB9, STAT1, and TAP1 in the experimental and control groups, respectively.
Differentially expressed proteins
Following label-free quantitative proteomic analysis, we obtained 1965 quantifiable proteins. The quantitative repeatability analysis between samples revealed that the quantitative experiment had good sensitivity and reliability (Fig. 1A). According to the screening conditions, we obtained 350 differentially expressed proteins, which could be clearly distinguished by volcano plot (Fig. 1B) and cluster heat map (Fig. 1C). Furthermore, the cluster heat map also indicated that these differential proteins could distinguish well between the spinal TB and control groups.
GO/KEGG and DO enrichment analyses
Through GO enrichment analysis, we found that these differentially expressed proteins are primarily involved in cytoplasmic translation, generation of precursor metabolites and energy, the electron transport chain, cellular respiration, oxidation of organic compounds to produce energy, aerobic respiration, collagen fibril organization, and other processes (Fig. 2A). KEGG pathway analysis showed that these differentially expressed proteins were primarily related to the ribosome, coronavirus disease (COVID-19), chemical carcinogenesis-reactive oxygen species, phagosome, oxidative phosphorylation, neutrophil extracellular trap formation, citrate cycle (TCA cycle), and other pathways (Fig. 2B). DO analysis found that these differentially expressed proteins were linked not only to pulmonary disease but also to osteoarthritis, bacterial infectious disease, atherosclerosis, arteriosclerotic cardiovascular disease, phagocyte bactericidal dysfunction, and other diseases. This provides novel insights into the etiology and comorbidities of spinal TB (Fig. 2C).
WGCNA and identification of key modules
WGCNA can cluster genes with similar expression patterns, analyze the correlation between modules and specific traits or phenotypes, and identify molecular markers strongly correlated with diseases. It is an advanced method frequently employed in bioinformatics analysis. Following the analysis, we found that the two modules "salmon" and "green" were highly correlated with spinal TB (Fig. 3A-I), and the gene expression in most modules also showed a significant correlation (Fig. 3J-P).
PPI network of hypoxia-related proteins
We intersected the proteins in the "salmon" and "green" modules of WGCNA with 3147 hypoxia-related genes and the 350 differential proteins screened in our study. Finally, 36 hypoxia-related proteins were obtained in total (Fig. 4A). We constructed a protein-protein interaction network using the STRING database, with 22 nodes and 27 edges (Fig. 4B). Through the MCODE plugin, we found that there is only one key module in the network (Fig. 4C).
The key hypoxia-related proteins and prediction models
To further explore the role of hypoxia-related genes in TB, we analyzed the GSE144127 dataset. We found that the transcriptional levels of 11 of these 36 hypoxia-related genes were consistent with their protein expression levels (10 up-regulated and 1 down-regulated), and the difference in transcriptional level between extrapulmonary TB and the control group was significant (Fig. 5A). These 11 genes were further screened in the extrapulmonary TB and control groups using LASSO and SVM-REF machine learning (Fig. 5B-D) and then intersected with the key module extracted by the MCODE plugin. Finally, three genes, PSMB9, STAT1, and TAP1, were obtained (Fig. 5E). The GSE83456 dataset revealed significant differences in these three genes between the TB and control groups (Fig. 5F-H). In addition, in the GSE144127 dataset, the AUCs of PSMB9, STAT1, and TAP1 in the extrapulmonary TB and control groups were 0.781, 0.804, and 0.788 (Fig. 5I). In the GSE83456 dataset, the AUCs of PSMB9, STAT1, and TAP1 in the TB and control groups were as high as 0.934, 0.961, and 0.966 (Fig. 5J). All three genes have high diagnostic value for TB and may play a crucial role in its pathogenesis. Finally, the five machine learning methods of logistic regression, Bayesian logistic regression, decision tree, random forest, and extreme gradient boosting were used to build prediction models based on these three genes. In the GSE144127 dataset, the accuracies in the extrapulmonary TB and control groups were 0.764, 0.764, 0.758, 0.701, and 0.783, respectively (Fig. 5K). In the GSE83456 dataset, the accuracies in the pulmonary TB and control groups were 0.822, 0.844, 0.822, 0.8, and 0.8 (Fig. 5L).
By comparison, extreme gradient boosting achieved the highest prediction accuracy for extrapulmonary TB (0.783), and Bayesian logistic regression achieved the highest prediction accuracy for pulmonary TB (0.844).
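For a single-gene classifier, the AUCs quoted above need no model fit at all: the AUC equals the probability that a randomly chosen case has higher expression than a randomly chosen control (the Mann-Whitney statistic). A sketch with invented toy expression values (the real values come from the GSE144127/GSE83456 cohorts):

```python
# Rank-based AUC (Mann-Whitney form) for a single-gene classifier.

def auc(cases, controls):
    """Probability a random case outscores a random control (ties count 0.5)."""
    wins = sum((c > k) + 0.5 * (c == k) for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

stat1_tb      = [8.2, 7.9, 9.1, 8.5]   # expression in TB samples (toy)
stat1_control = [6.1, 7.0, 6.5, 8.0]   # expression in controls (toy)
print(auc(stat1_tb, stat1_control))    # 0.9375: one control outscores one case
```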
Immune infiltration analysis
By ssGSEA analysis, we obtained 25 types of infiltrating immune cells across all protein samples (Fig. 6A). From the correlation heat map, we can observe that activated dendritic cells have a strong positive correlation with gamma delta T cells (r = 0.73), and gamma delta T cells also have a strong positive correlation with immature B cells (r = 0.77). Monocytes and most lymphocytes also show a significant correlation (Fig. 6B). Differential analysis showed that most immune cells were highly infiltrated in the disease group, and activated dendritic cells, gamma delta T cells, and immature B cells were significantly different between the spinal TB and control groups (p-value < 0.05) (Fig. 6C).
Correlation of hypoxia-related genes PSMB9, STAT1, and TAP1 with immune cells
Following correlation analysis (Fig. 7), we found that PSMB9, STAT1, and TAP1 significantly correlated with activated dendritic cells, gamma delta T cells, immature B cells, and neutrophils. In addition, STAT1 and TAP1 were also significantly positively correlated with central memory CD4 T cells and macrophages while negatively correlated with Type 1 T helper cells. PSMB9 and STAT1 had the strongest and most significant correlation with gamma delta T cells, while TAP1 had the strongest and most significant correlation with immature B cells. This suggests that these key genes and immune cells might play an important role in the pathogenesis of TB, including spinal TB (Fig. 7A-U).
Blood routine data validation
Through a statistical analysis of the routine blood examinations of 162 normal patients and 237 patients with spinal TB, we found that monocytes and platelets were higher in the spinal TB group than in the normal control group, whereas lymphocytes were higher in the normal control group than in the spinal TB group, and the differences were statistically significant (p-value < 0.05) (Fig. 8A-C). According to our ssGSEA-based immune cell infiltration results, monocytes and macrophages had higher infiltration levels in the disease group. This finding was corroborated by the routine blood data.
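The two-group comparison behind Fig. 8 is a standard independent-samples test. A minimal sketch computing the Welch t-statistic and degrees of freedom (in practice the p-value is then read from a t distribution, e.g. with scipy.stats; the monocyte counts below are illustrative, not the study's data):

```python
import math
import statistics as st

# Welch t-statistic for, e.g., monocyte counts in spinal-TB vs. control panels.

def welch_t(a, b):
    """Return (t statistic, Welch-Satterthwaite degrees of freedom)."""
    va, vb = st.variance(a) / len(a), st.variance(b) / len(b)
    t = (st.mean(a) - st.mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

monocytes_tb      = [0.72, 0.81, 0.69, 0.90, 0.77, 0.85]  # toy counts (x10^9/L)
monocytes_control = [0.48, 0.55, 0.51, 0.60, 0.44, 0.58]

t, df = welch_t(monocytes_tb, monocytes_control)
print(t > 0)  # TB group higher, matching the study's direction: True
```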
Pharmaco-transcriptomic analysis
In the GSE147690 dataset, we found that PSMB9, STAT1, and TAP1 were also highly expressed in the multidrug-resistant TB group, and the difference was highly statistically significant (p-value < 0.01) (Fig. 9A-C). PSMB9, STAT1, and TAP1 may therefore be potential therapeutic targets for multidrug-resistant TB. We thus performed a pharmaco-transcriptomic analysis and found that 11 drug compounds, such as estradiol, cyclosporine, and cisplatin, can up-regulate the expression of PSMB9, while acetaminophen and calcitriol can down-regulate it. Cyclosporine, dactinomycin, diethylstilbestrol, and 11 other drug compounds can up-regulate the expression of STAT1, whereas afimoxifene, azathioprine, diclofenac, and 14 other drug compounds can down-regulate it, and acetaminophen, estradiol, and methotrexate have regulatory effects on the expression of TAP1 (Fig. 9D-F). This will help provide new insights into the treatment of multidrug-resistant TB.
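The drug-gene relations reported above can be organized as a small signed edge list, which is essentially what the Cytoscape network in Fig. 9 visualizes. Only the PSMB9 edges explicitly named in the text are shown; the full network contains many more:

```python
# Signed drug-gene edge list (+1 up-regulates, -1 down-regulates),
# restricted to the PSMB9 relations named in the text.

regulation = {
    ("estradiol", "PSMB9"): +1,
    ("cyclosporine", "PSMB9"): +1,
    ("cisplatin", "PSMB9"): +1,
    ("acetaminophen", "PSMB9"): -1,
    ("calcitriol", "PSMB9"): -1,
}

def regulators(gene, direction):
    """Drugs acting on `gene` in the given direction, sorted alphabetically."""
    return sorted(d for (d, g), s in regulation.items()
                  if g == gene and s == direction)

print(regulators("PSMB9", -1))  # ['acetaminophen', 'calcitriol']
```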
Immunohistochemical analysis results
Immunohistochemical staining of PSMB9, STAT1, and TAP1 was performed in 5 patients with spinal tuberculosis and 5 patients with lumbar disc herniation. The results showed that the specific expressions of PSMB9, STAT1, and TAP1 in the experimental group were significantly higher than in the control group (Fig. 10A-F). We used Image J software to detect the positive rate of immunohistochemical images. The positive rate data of PSMB9, STAT1, and TAP1 were imported into SPSS 26.0, and the difference between the two groups was statistically analyzed by independent sample t-test. The positive rates of PSMB9, STAT1, and TAP1 genes in the experimental group were significantly higher than those in the control group (p-value < 0.001) (Fig. 10G-I). It showed that PSMB9, STAT1, and TAP1 were differentially expressed in the experimental and control groups. This result confirms the accuracy of our analysis.
Discussion
Granuloma is an important feature of TB, and it is also a place where M. tuberculosis obtains nutrients and evades immunity, and plays a key role in the spread of TB infection [39,40]. Studies suggest that M. tuberculosis granulomas may be in a hypoxic environment in which M. tuberculosis enters a non-replicating "quiescent" state, thereby enhancing bacterial resistance to antibiotics [41]. Hua Yang et al. found that M. tuberculosis can secrete fatty acid-degrading protein A under hypoxic conditions, regulate fatty acid metabolism, and inhibit the secretion of pro-inflammatory cytokines, thereby inhibiting host immunity so that M. tuberculosis could survive in the granuloma and persist in the host infection [42]. Therefore, the molecular mechanism of hypoxia-related genes in tuberculosis infection deserves further exploration.
By analyzing the differentially expressed proteins between the spinal TB group and the control group, we found that in the GO enrichment analysis, these differential proteins were mainly concentrated in the generation of precursor metabolites and energy, cellular respiration, oxidation of organic compounds to produce energy, aerobic respiration, the respiratory electron transport chain, and the reactive oxygen species metabolic process. KEGG pathway analysis also showed that these differential proteins were mainly concentrated in the ribosome, chemical carcinogenesis-reactive oxygen species, oxidative phosphorylation, and citrate cycle (TCA cycle) pathways. Ribosomal stability is very important for the persistence and latent infection of mycobacteria. Under hypoxic conditions, the ribosome-associated factor under hypoxia (RafH) is the primary factor leading to the hypoxic survival of mycobacteria mediated by the response regulator DosR [43]. All these results indicate that hypoxia is closely related to the pathogenesis of TB.
In this study, we screened out three key hypoxia-related genes, PSMB9, STAT1, and TAP1, which were highly expressed at the protein and transcriptional levels in spinal TB. Notably, previous studies have shown that PSMB9, STAT1, and TAP1 are all associated with TB. A meta-analysis integrating whole-blood transcriptional expression datasets from multiple hosts, comparing the different datasets through a network method, found that there is a highly active core gene group in TB composed of 380 genes, of which STAT1 and PSMB9 are key hubs [44]. PSMB9 is an immunoproteasome subunit involved in MHC class I antigen presentation, and the expression of this gene is induced by inflammatory factors, such as interferon-gamma [45,46]. Tetsuaki Shoji et al. found that in cisplatin-resistant lung cancer cell line models, the transcription levels of PSMB8 and PSMB9 were high and the protein expression levels were also significantly increased; after treatment with immunoproteasome inhibitors, they concluded that immunoproteasomes may be an effective therapeutic target for some cisplatin-resistant lung cancers [47]. STAT1 is signal transducer and activator of transcription 1, a member of the STAT protein family [48]. This protein can be activated by ligands such as interferon-alpha and interferon-gamma, and plays an important role in the immune response to viral, fungal, and mycobacterial pathogens [49,50]. STAT1 transcriptional up-regulation in severe COVID-19 patients is a potential predictive biomarker and a target for certain interferon pathway-targeted therapies [51]. Tuo Liang et al. also found that STAT1 is related to the pathogenesis of spinal TB and other extrapulmonary TB, possibly through involvement in M1-macrophage polarization leading to bone destruction; it is an important biomarker of tuberculosis and a potential therapeutic target [52].
TAP1 (transporter 1) is an ATP-binding cassette subfamily B member. In the process of antigen processing and presentation, the heterodimeric transporter associated with antigen processing (TAP) transports peptides produced by the immunoproteasome to the endoplasmic reticulum, where they perform their immune function. TAP1 and PSMB9 are involved in the formation of the heterodimeric transporter and the immunoproteasome, respectively. When TAP is dysfunctional, pathogenic microorganisms can escape immune surveillance [53,54]. Several studies have shown that abnormalities in the TAP1 gene are closely associated with pulmonary TB [55,56]. In this study, PSMB9, STAT1, and TAP1 showed high diagnostic and predictive value for both extrapulmonary TB and TB. These results indicate that PSMB9, STAT1, and TAP1 may play a role in the pathogenesis of TB, including spinal TB.
In addition, PSMB9, STAT1, and TAP1 were also significantly upregulated in the multidrug-resistant TB group. Pharmaco-transcriptomic analysis showed that estradiol, cyclosporine, cisplatin, and other drug compounds can upregulate the expression of PSMB9, while acetaminophen and calcitriol can down-regulate it. Cyclosporine, dactinomycin, diethylstilbestrol, and other compounds can upregulate the expression of STAT1; 14 compounds, such as afimoxifene, azathioprine, and diclofenac, can down-regulate it; and acetaminophen, estradiol, and methotrexate can both up- and down-regulate STAT1. Dactinomycin, daunorubicin, camptothecin, and other compounds can upregulate the expression of TAP1, while arsenic trioxide can down-regulate it. Cyclosporine is an important immunosuppressant; its main mechanism is suppression of the immune system by inhibiting the activity and growth of T cells [57]. Delayed activation of T lymphocytes and insufficient secretion of related cytokines can lead to pathogenic inflammation, increased bacterial load, spread of infection, and severe disease progression [58,59]. Therefore, T lymphocytes play an important role in immune protection against M. tuberculosis infection. Many studies have also shown that cyclosporine is associated with an increased risk of activation of TB and of latent tuberculosis infection [60]. In this study, we found that cyclosporine can upregulate the expression of PSMB9 and STAT1, which may be one of the mechanisms of the cyclosporine-induced increased risk of TB activation. Calcitriol is the "active metabolite" of vitamin D3. An in vitro study showed that it has antibacterial properties and inhibits the production of pro-inflammatory cytokines [61].
In addition, calcitriol also plays a role in host defense against Mycobacterium tuberculosis infection by inducing antimicrobial peptides (AMPs) and/or autophagy in colonized macrophages [62]. Klauer et al. first proved that calcitriol could inhibit the proliferation of pathogenic Mycobacterium tuberculosis in human macrophages [63]. This provides a new reference for the treatment of multidrug-resistant TB.
TB is closely related to the immune response in the body, but the immune mechanism of anti-M. tuberculosis antibodies is not completely clear [64]. By analyzing the ssGSEA data, we described the immune cell infiltration of spinal TB. We found that activated dendritic cells, gamma delta T cells, and immature B cells differed between the spinal TB group and the control group, and that they were significantly positively correlated with PSMB9, STAT1, and TAP1. Dendritic cells have the function of activating and stabilizing T and B lymphocytes and can differentiate into different immune cells, participate in cellular and humoral responses, and form complexes with multifunctional APCs that play a key role in anti-pathogen activity; they are among the most important immune regulatory cells [65,66]. Dendritic cells play a role in granuloma formation by inducing the migration of natural killer (NK) cells and T cells in vitro under the stimulation of M. tuberculosis [67]. Gamma delta T cells are unconventional T cells that play an important role in recognizing foreign pathogens and stress signals of infected cells [68][69][70]. In tuberculosis, γδ T cells can rapidly recognize M. tuberculosis antigens, respond to the BCG vaccine, inhibit the growth of mycobacteria, and are potential vaccine targets against TB [71]. We also found a significant positive correlation between STAT1 and macrophages, which again suggests that STAT1 might induce M1-macrophage polarization and thereby cause bone destruction in spinal TB. In addition, we analyzed the differences in monocytes/macrophages in patients with spinal TB using the routine blood examination data of 162 normal controls and 237 patients with spinal TB and found that the number of monocytes/macrophages in the disease group was significantly higher than in the normal control group, verifying the obtained results.
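The gene-to-immune-cell correlation analysis described above can be sketched with a rank (Spearman) correlation, computed from scratch; the expression values and ssGSEA-style scores below are hypothetical placeholders, not the study's data:

```python
def ranks(xs):
    """1-based ranks of xs (ties are not handled in this sketch)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical per-sample values: STAT1 expression vs. a macrophage infiltration score
stat1 = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7]
macrophage_score = [0.31, 0.44, 0.28, 0.52, 0.40, 0.47]
print(f"rho = {spearman(stat1, macrophage_score):.2f}")
```

A rho near +1 corresponds to the significant positive gene-immune cell correlations reported above.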
Like other studies, ours had limitations. First, the sample size was inadequate: for the protein-level testing we used only five matched pairs (10 samples), which is insufficient for large-sample analysis.
Second, there are limitations in using routine blood data for the differential immune-cell analysis; tissue-based flow cytometry should be used for further verification. In addition, we did not perform further laboratory analyses to verify our results, and this study should be further validated through cell and animal experiments.
Conclusion
PSMB9, STAT1, and TAP1 might play a key role in the pathogenesis of TB, including spinal TB, and the protein products of these genes can serve as diagnostic markers and potential therapeutic targets for TB.
Investigation of Heat Transfer and Pressure Distribution in Power Law Fluids Flowing through a Rectangular Channel Blocked by a Single Heated Circular Cylinder at Inlet
In this paper, the flow of a power law fluid has been investigated for Newtonian and non-Newtonian fluids, together with the temperature distribution, in a rectangular channel containing a heated circular cylinder near the inlet with different blockage ratios. The generalized non-Newtonian power law model coupled with the energy equation is solved numerically, considering the power law index in the range 0.8 ≤ n ≤ 1.2 and the Reynolds number in the range from 1000 to 10000. A heated circular cylinder is fixed near the inlet of the channel with blockage ratios (cylinder radius to channel height) of 1:10, 2:10, and 3:10. The governing partial differential equations coupled with the energy equation are discretized with the finite element-based software COMSOL Multiphysics 5.4. The results are shown with the help of surface plots, tables, and graphs. The computational results for maximum and minimum pressure around the cylinder, temperature along the center line of the cylinder, and the local Nusselt number are discussed in detail.
Introduction
Fluid flow around a heated cylinder, and heat transfer from the cylinder into the fluid due to both free and forced convection, have attracted great interest among researchers because such situations arise in a variety of applications [1][2][3][4][5], such as extrusion through tubes and cylinders, heat exchangers, drying of textiles and other materials, purification processes, steam engines, and so on. In some cases heat transfers by free convection, i.e., fluid molecules move due to differences in density and temperature, and in other cases by forced convection, i.e., fluid molecules are forced to move by applied external forces. When free and forced convection are of comparable magnitude, the heat flow is said to occur with mixed convection. In this article, we have investigated heat transfer due to forced convection from a heated cylinder placed at the inlet of the channel in a power law fluid, varying the Reynolds number and the power law index. Many researchers have investigated heat flow past cylinders, in isolation or in scattered arrangements, experimentally or analytically. A brief survey of such literature is provided below.
Khan et al. [6] used the von Karman-Pohlhausen method to obtain an analytic solution to the problem of flow around, and heat transfer from, an infinite circular cylinder. They obtained expressions for the drag coefficient and the heat transfer coefficient and analysed them for a wide range of Reynolds and Prandtl numbers. Panda [7] studied the hydrodynamics of power law fluids past a pair of cylinders fixed side by side in the flow domain.
He carried out a parametric study by taking the power law index in the range 0.2 ≤ n ≤ 1.8, the Reynolds number in the range 0.1 ≤ Re ≤ 100, and the gap between the two cylinders in the range 1.2 ≤ G ≤ 4, and observed the influence of Re, n, and G upon the streamlines, surface pressure, and drag and lift coefficients. The flow over, and heat transfer from, an isolated heated cylinder is the simplest model for investigating the hydrodynamics, the pressure on the cylinder surface, and the heat transfer. In this case, the flow and heat transfer are influenced by the flow behaviour index, the Reynolds number, and the cylinder radius [8][9][10][11][12][13]. Sanyal and Dhiman [14] studied the hydrodynamics of shear-thinning fluids flowing past a pair of square cylinders with mixed-convection heat flow. The side-by-side gap between the cylinders was parameterized in the range from 1 to 5, the Reynolds number in the range Re = 1-40, and Pr = 40. They found that leading-edge flow separation from the cylinders disturbs the wake structures and vortex shedding patterns in shear-thinning fluids, which had not previously been observed for Newtonian fluids [15]. Another numerical study was carried out by Haider [16] to analyse the heat flow characteristics of a Newtonian fluid past clusters of isothermal cylinders fixed within the flow domain. Cylinders were placed in an in-line or scattered manner, and it was found that the heat flow from scattered cylinders is slightly higher than when the cylinders are placed in-line. Kumar et al. [17] performed a remarkable numerical study using the commercial software FLUENT to analyse forced convection around a heated half cylinder placed in the flow domain with a 25% blockage ratio, Pr = 50, and Re = 1-40. It was found that the drag coefficient magnitude is higher in shear-thickening fluids than in shear-thinning fluids. They also found that, overall, the heat transfer rate increases with increasing Reynolds number Re.
The average Nusselt number was found to have greater values for shear-thinning fluids than for shear-thickening and Newtonian fluids. There is always a complex interplay between kinematic and other fluid properties. Mishra [18] investigated forced convection heat transfer from a pair of heated cylinders numerically using COMSOL Multiphysics. The base fluid is taken to be a power law fluid with flow behaviour index in the range 0.2 ≤ n ≤ 2. A detailed parametric study with values in the ranges 5 ≤ Re ≤ 200, 0.7 ≤ Pr ≤ 100, and 0.1 ≤ D/L ≤ 0.3 (diameter-to-length ratio) reveals that higher heat transfer is observed for higher values of the Reynolds and Prandtl numbers in the case of shear-thinning fluids. There are other remarkable contributions by researchers studying flow past, and heat transfer from, heated cylinders under different parametric considerations in the literature [19][20][21][22][23][24][25][26][27].
In the current research, we study the flow of a power law fluid through a rectangular channel and heat transfer from a heated cylinder of variable radius fixed near the inlet of the channel. The flow behaviour index is assumed to be in the range 0.8 ≤ n ≤ 1.2, the Reynolds number in the range from 1000 to 10000, and the blockage ratio (radius-to-height ratio) is taken to be 0.1, 0.2, or 0.3. The values of the local Nusselt number found in our case are in good agreement with the correlation values provided by [28]. Flow variables are further investigated in the parametric study using COMSOL Multiphysics by changing the values of the parameters listed above. In section 2, we give the problem statement together with the domain discretization, the governing nonlinear partial differential equations, and the boundary conditions. In section 3, a validation study is carried out to compare the results with an empirical correlation. Section 4 is dedicated to results and discussion, and finally in section 5 we summarize the work and give concluding remarks.
Problem Formulation
Consider the laminar flow of a power law fluid (a time-independent non-Newtonian fluid) through a rectangular channel (of length l = 4 m and height h = 1 m) in which a heated circular cylinder has been fixed in the first half of the channel near the inlet, as shown in Figure 1. Let r denote the radius of the cylinder; the cylinder is maintained at a constant temperature T_h = 293 K. It is further assumed that: (i) the initial or reference temperature of the fluid is T_ref = 250 K; (ii) the walls of the channel are insulated, i.e., the heat flux through the walls is zero; (iii) the fluid velocity at the boundary is assumed to be nonzero, i.e., slip boundary conditions are used; (iv) the ratio of the radius of the cylinder to the height of the channel, r/h, is taken to be 1:10, 2:10, or 3:10.
Domain Discretization and Mesh Statistics.
Finite element methods require the domain of interest to be divided into small pieces called elements. Here, the domain of the problem is divided into small irregular triangles. Initially, we considered six different discretizations with numbers of elements N = 5156, N = 10344, N = 12836, N = 25320, N = 58726, and N = 115152. The pictorial representations for each of these discretizations are shown in Figure 2.
At the first stage, we obtain solutions for different numbers of elements (mesh sizes) in order to find an N-value (say N_0) beyond which the solutions become mesh-independent, i.e., any number of elements greater than N_0 can be selected. It is well known that increasing the number of elements improves the solution accuracy; a point is reached at which no further improvement is visible, and we say the solutions have become mesh-independent. In Figure 3, we present the computed magnitude of the fluid velocity for different mesh sizes. The peak in the velocity graph is caused by the presence of the cylinder near the inlet of the channel. The convergence of the solution is clearly observable from this figure, as the velocity curves for N = 58726 and N = 115152 overlap. Extra fine meshes will be used for further computations; in this case the number of triangular elements is N = 58726, and the complete mesh statistics for the extra fine meshes are presented in Table 1.
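The mesh-independence criterion just described can be sketched as follows: refine until the monitored quantity changes by less than a tolerance between successive meshes. The peak-velocity values below are hypothetical stand-ins for the curves in Figure 3, not data from the paper:

```python
# Element counts from the study, paired with a hypothetical monitored quantity
# (peak velocity magnitude) for each mesh.
meshes = [5156, 10344, 12836, 25320, 58726, 115152]
peak_velocity = [1.92, 1.97, 1.99, 2.03, 2.045, 2.046]

def first_converged(ns, vals, tol=1e-3):
    """Return the first N from which the monitored value changes by less than
    `tol` (relative) on the next refinement, i.e., mesh independence sets in."""
    for (n0, v0), (n1, v1) in zip(zip(ns, vals), zip(ns[1:], vals[1:])):
        if abs(v1 - v0) / abs(v1) < tol:
            return n0
    return None  # keep refining: no mesh-independent level found yet

print(first_converged(meshes, peak_velocity))
```

With these illustrative numbers the criterion selects N = 58726, mirroring the choice of the extra fine mesh in the text.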
Governing Equations and Boundary Conditions
To analyse the fluid flow and the heat transfer due to convection and conduction from the heated circular cylinder into the fluid domain Ω, the coupled system of momentum, continuity, and energy balance equations is used as the model.
Let v_x and v_y denote the x- and y-components of the velocity $\vec{v}$, respectively. The flow is governed by the following set of equations.
Momentum balance:
$$\rho\,(\vec{v}\cdot\nabla)\vec{v} = -\nabla p + \nabla\cdot\!\left[\eta(\dot{\gamma})\left(\nabla\vec{v} + (\nabla\vec{v})^{T}\right)\right].$$
Mass balance:
$$\nabla\cdot\vec{v} = 0,$$
where ρ denotes the density, which is constant for incompressible flows, and p denotes the hydrostatic pressure field. In the current problem, we use the power law model (a time-independent non-Newtonian fluid model), according to which the apparent effective viscosity is
$$\eta(\dot{\gamma}) = K\,\dot{\gamma}^{\,n-1},$$
where K denotes the flow consistency index, $\dot{\gamma}$ denotes the shear rate normal to the plane of shear, and n denotes the flow behaviour index. For n < 1, n = 1, or n > 1, power law fluids are categorized as pseudoplastic, Newtonian, or dilatant, respectively. We intend to test the flow behaviour and heat transfer for different values of the flow behaviour index; therefore, we fix K = 1.81 × 10⁻⁵ Pa·sⁿ and $\dot{\gamma}_{\min}$ = 0.01 s⁻¹. The temperature distribution in the fluid flow domain is governed by the energy balance:
$$\rho c_p\,\vec{v}\cdot\nabla T = \nabla\cdot(\kappa\nabla T) + \dot{q}_v, \qquad \vec{q} = -\kappa\nabla T.$$
In these equations, T(x, y) is the temperature field, c_p is the specific heat capacity, $\vec{q}$ is the local conductive heat flux density, κ is the material's heat conductivity, ∇T is the temperature gradient, and $\dot{q}_v$ is the volumetric heat source. In the current problem there are no heat sources, so $\dot{q}_v = 0$.
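The power law viscosity model above, with the paper's values K = 1.81 × 10⁻⁵ Pa·sⁿ and γ̇_min = 0.01 s⁻¹ (the lower shear-rate cutoff that regularizes η as γ̇ → 0), can be sketched as:

```python
def apparent_viscosity(gamma_dot, n, K=1.81e-5, gamma_min=0.01):
    """Power-law apparent viscosity eta = K * gamma_dot**(n - 1),
    with the shear rate clipped from below at gamma_min."""
    g = max(gamma_dot, gamma_min)
    return K * g ** (n - 1.0)

for n, label in [(0.8, "pseudoplastic"), (1.0, "Newtonian"), (1.2, "dilatant")]:
    etas = [apparent_viscosity(g, n) for g in (1.0, 10.0, 100.0)]
    print(label, [f"{e:.3e}" for e in etas])
```

For n = 0.8 the viscosity falls with shear rate (shear-thinning), for n = 1.2 it rises (shear-thickening), and for n = 1 it is the constant K, matching the classification in the text.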
Boundary Conditions.
Assume that Ω denotes the solution domain and ∂Ω = Γ₁ ∪ Γ₂ ∪ Γ₃ ∪ Γ₄ ∪ Γ₅ denotes the whole boundary, where Γ₁, Γ₂, Γ₃, Γ₄, and Γ₅, respectively, denote the upper wall, lower wall, inlet, outlet, and the surface of the heated circular cylinder. As we are not interested in the viscous effects near the walls and at the outer surface of the heated cylinder, slip conditions are chosen. The governing equations (1)-(6) will be discretized and solved subject to the following conditions using the Galerkin finite element method implemented in COMSOL Multiphysics 5.4.
Comparison with Empirical Correlation for Validation of the Solutions
In this section, the solutions are validated by comparison with an empirical correlation. Empirical correlations express the Nusselt number as a function of the Reynolds and Prandtl numbers: Nu = f(Re, Pr). For the current simulations, we have compared the values of the local Nusselt number Nu_x of the flow and heat transfer past a heated circular cylinder with the empirical correlation values given by Lienhard [28]. These comparisons are shown in Figures 4-6 for different values of the local Reynolds number, cylinder radius, and flow behaviour index. From these comparisons it can be deduced that our results agree with the empirical correlation [28] to a good extent, especially when the cylinder-to-height ratio is increased; see Figures 5 and 6. With the initial temperature T = 250 K, the governing equations (1), (2), and (5) are discretized using the finite element method implemented in COMSOL Multiphysics and solved subject to the conditions given in equations (7)-(11) for the pressure distribution, velocity profile, and temperature distribution. Consequently, many different simulations are generated for different values of the parameters listed above. Extra fine meshes are used to discretize the domain of the problem. The pressure increases with increasing Reynolds number, whereas by increasing the flow behaviour index from n = 0.8 to n = 1.2 we observe that the pressure at the front and back surfaces decreases. It can further be deduced that for pseudoplastic fluids the pressure on the surface of the circular cylinder is lower than when the fluid is dilatant. We can also see that increasing the radius increases the pressure on the cylinder surface. In Table 2, we present the maximum numerical values of the pressure over the surface of the cylinder for different parametric changes.
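The validation step above compares simulated local Nusselt numbers against an empirical correlation. The paper's reference correlation [28] is not reproduced in this excerpt, so as a hedged stand-in the sketch below uses the classic Hilpert correlation for cross-flow over a circular cylinder, Nu = C·Re^m·Pr^(1/3), with its standard piecewise constants; Pr = 0.7 (air) is an illustrative choice:

```python
HILPERT = [  # (Re_low, Re_high, C, m) -- standard Hilpert constants
    (0.4, 4.0, 0.989, 0.330),
    (4.0, 40.0, 0.911, 0.385),
    (40.0, 4000.0, 0.683, 0.466),
    (4000.0, 40000.0, 0.193, 0.618),
    (40000.0, 400000.0, 0.027, 0.805),
]

def nusselt_hilpert(re, pr):
    """Average Nusselt number for cross-flow over a cylinder, Nu = C Re^m Pr^(1/3)."""
    for lo, hi, c, m in HILPERT:
        if lo <= re < hi:
            return c * re ** m * pr ** (1.0 / 3.0)
    raise ValueError("Re outside correlation range")

for re_val in (1000.0, 4000.0, 10000.0):
    print(f"Re = {re_val:>7.0f}: Nu ~ {nusselt_hilpert(re_val, 0.7):.1f}")
```

A validation like the paper's would overlay the correlation's Nu(Re) against the simulated Nu_x for each blockage ratio and flow behaviour index.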
For further analysis, we have plotted the maximum pressure on the surface of the cylinder against the flow behaviour index n for different Reynolds numbers Re and cylinder radii r (see Figure 9). The corresponding temperature distributions are shown in Figures 10-12, respectively. These figures show that there is a significant decrease in the thickness of the thermal layer along the horizontal line through the center of the heated cylinder as the Reynolds number increases from Re = 1000 to Re = 4000 and finally to Re = 10000. This type of investigation has also been reported by [16], who placed several cylinders in in-line and scattered arrangements and observed the temperature changes in the fluid domain. We are confined to determining the temperature changes due to a single cylinder; a larger Reynolds number is responsible for more heat flow within the fluid. However, increasing the radius does not seem to have any significant impact on the temperature contours shown in these figures. Temperature profiles for different values of the Reynolds number are displayed in Figures 13 and 14; the viscous dissipation can be clearly observed in these figures, and the graphs are plotted for different values of the cylinder radius r.
Conclusion
In this paper, we have discussed the non-isothermal flow through a rectangular channel fitted with a heated circular cylinder at constant temperature T_h = 293 K, with radius-to-height ratios of 1:10, 2:10, and 3:10, near the inlet of the channel. Assuming a power law fluid flowing through the channel at initial reference temperature T_ref = 250 K, we analysed the temperature distribution within the fluid and its dependence upon various factors, including the flow behaviour index n and the Reynolds number Re. For the parametric study, the flow behaviour index was taken in the range 0.8 ≤ n ≤ 1.2, whereas the Reynolds number was assumed to be in the range from 1000 to 10000. Solutions were obtained using Galerkin's finite element method implemented in COMSOL Multiphysics. Satisfactory results were achieved when our solutions were compared with the empirical correlation. It is further concluded [25][26][27] that: (i) the local Nusselt number obtained downstream in this problem is in very good agreement with that given by the correlation [28] for the current geometry of the problem; (ii) the pressure increases with increasing Reynolds number, whereas by increasing the flow behaviour index from n = 0.8 to n = 1.2 we observe that the pressure at the front and back surfaces decreases.
Photon mass via current confinement
A parity invariant theory, consisting of two massive Dirac fields, defined in three dimensional space-time, with the confinement of a certain current is studied. It is found that the electromagnetic field, when coupled minimally to these Dirac fields, becomes massive owing to the current confinement. It is seen that the origin of photon mass is not due to any kind of spontaneous symmetry breaking, but only due to current confinement.
Introduction
Field theories in three dimensional space-time have been a subject of intense study for a couple of decades now. There are several reasons which make such field theories interesting. Firstly, theories in lower dimensions are often simpler than their higher dimensional counterparts. Secondly, they offer new structures, like the possibility of a gauge invariant mass term for the gauge field in the form of a Chern-Simons term in the action. Interestingly, it was recently found that planar QED with a tree level Chern-Simons term admits a photon which is composite [1]. Theories with a Chern-Simons term are found to play an important role in the physics of the quantum Hall effect and anyonic superconductors [2,3,4,5,6]. Models which exhibit dynamical mass generation and spontaneous chiral symmetry breaking have been constructed and extensively studied [7,8,9,10,11]. In recent years, with the discovery of graphene [12] and topological insulators [13], there has been renewed interest in the study of lower dimensional field theories.
Colour confinement is one of the still not well understood aspects of QCD. One of the main hindrances is the fact that the low energy dynamics in such a theory becomes non-perturbative, which makes dealing with it difficult. To circumvent this difficulty, there have been attempts to assume colour confinement from the beginning and work subsequently to see if one can get some idea about the dynamics of non-Abelian gauge fields [14,15,16]. In a remarkable paper by Srinivasan and Rajasekaran, it was shown that by assuming quark confinement it was possible to derive QCD from it [16]. Confinement has also been studied in theories defined in three dimensional space-time. It was shown by Polyakov that compact planar QED exhibits charge confinement [17], while the case of non-compact QED was studied by Grignani et al. [18].
In this paper, it is shown that an assumption of confinement of a certain current gives rise to the photon mass. The theory considered here consists of two species of free Dirac fermions living on the plane, defined such that the theory is even under parity. These fermions are minimally coupled to the photon field. It is found that although the photons in the theory are massive, there is no spontaneous symmetry breaking. It is also shown that when such a theory is defined over a manifold with a finite boundary, there exist massless particles living on the boundary.
In the following section the model is introduced and its various features are discussed. Section (3) deals with the effective action of the photon and its mass. Section (4) deals with the case when the theory lives on a manifold with a finite boundary, followed by a brief summary.
In this paper, we are interested in the physical consequences if the current j^µ_+ − j^µ_− is confined. As pointed out by Kugo and Ojima in the context of QCD, and further discussed at length in Ref. [21], the statement of colour charge confinement can be accurately stated as the absence of charge-bearing states in the physical sector of the Hilbert space: Q_colour|phys⟩ = 0. In what follows, we shall work with a stronger condition than the Kugo-Ojima condition, and demand that the physical space of the theory described by Lagrangian (1) should not have any states which carry the (j^µ_+ − j^µ_−) current, that is: (j^µ_+ − j^µ_−)|phys⟩ = 0.² This shall be referred to as the current confinement condition henceforth. Since we are demanding a priori that this current confinement condition should hold, it ought to be understood as a constraint. There exists a well known and powerful technique to implement such a constraint, using what is called a Lagrange multiplier (auxiliary) field [22]. One postulates the existence of a Lagrange multiplier field whose only appearance in the action is via its coupling to the constraint condition. Thus the equation of motion corresponding to this field, obtained by demanding that the functional variation of the action with respect to this field be zero, is simply the constraint condition. It is worth pointing out that such Lagrange multiplier fields have no dynamics of their own, in the sense that there are no terms in the action comprising spatial or temporal derivatives of these fields to begin with; their sole purpose is to ensure implementation of the constraint. Thus, by enlarging the degrees of freedom of the theory by an additional field, one ensures that the constraint condition gets neatly embedded into the action, and hence into the dynamics of the theory.
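The Lagrange multiplier mechanism described above, an auxiliary variable whose own stationarity equation is the constraint, can be illustrated with a finite-dimensional toy example: minimizing f(x, y) = x² + y² subject to g(x, y) = x + y − 1 = 0. Stationarity of L = f + λg in (x, y, λ) is a linear system here, solved below by hand-rolled Gaussian elimination (this toy problem is my illustration, not part of the paper):

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    A = [row[:] for row in A]
    b = b[:]
    n = len(b)
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][c] * x[c] for c in range(i + 1, n))) / A[i][i]
    return x

# Stationarity of L = x**2 + y**2 + lam*(x + y - 1):
#   dL/dx = 2x + lam = 0,  dL/dy = 2y + lam = 0,  dL/dlam = x + y - 1 = 0.
# The last equation -- variation with respect to the multiplier -- IS the constraint.
x, y, lam = solve_linear([[2.0, 0.0, 1.0],
                          [0.0, 2.0, 1.0],
                          [1.0, 1.0, 0.0]],
                         [0.0, 0.0, 1.0])
print(x, y, lam)  # the constrained minimum sits at x = y = 1/2
```

Exactly as for a_µ in the field theory: λ has no kinetic term of its own, and varying it reproduces the constraint.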
In our case the Lagrange multiplier Bose field is a_µ, which is meant to implement the constraint j^µ_+ − j^µ_− = 0 and couples only to this current, so that the Lagrangian (1) gets an additional term. Note that the equation of motion for the a_µ field, δS/δa_µ = 0, gives the constraint j^µ_+ − j^µ_− = 0. In order to preserve parity, the a_µ field has to be a pseudovector, owing to its coupling with the pseudovector current. Thus a_µ can in general be written as the curl of some vector field χ: a_µ = ε_µνλ ∂^ν χ^λ, and cannot have a contribution expressible as the gradient of some scalar field. This asserts that a_µ cannot be a gauge field, since under a gauge transformation a gauge field shifts by a vector piece ∂_µΛ, which is not consistent with the pseudovector nature of a_µ. Further note that since a_µ is the curl of χ_µ, it immediately follows that its divergence vanishes: ∂_µ a^µ = 0.
² The physical space here stands for the set of states in the vector space of the theory which do not have negative norm [22]. In the case when the negative-normed states are altogether absent, the condition (j^µ_+ − j^µ_−)|phys⟩ = 0 holds for the whole Hilbert space and hence becomes an operator condition, j^µ_+ − j^µ_− = 0.
In the functional integral formulation of quantum field theory, the generating functional is an object of central importance, which for this theory reads: Here η and η̄ are external sources which are coupled to the Fermi fields ψ̄ and ψ respectively.
Before we proceed with the details of the quantum theory, it is worth pointing out that if one functionally integrates a µ in the above generating functional, one immediately obtains the current confinement condition since a µ appears linearly in the action. This clearly shows that in the quantum theory as well, the Lagrange multiplier field a µ is properly implementing the current confinement constraint.
Since the Lagrangian (4) of the theory is invariant under transformation (3), the requirement that the generating functional of the theory also be invariant under (3), that is δZ = 0, is not unreasonable. Interestingly, it will be seen that this gives rise to Ward-Takahashi identities among various n-point functions of this theory and leads to non-trivial consequences. Demanding that Z be invariant under the infinitesimal version of transformation (3) means δZ = 0, which can be simplified to equation (5). In terms of the generating functional of connected diagrams, W[η±, η̄±] = −i ln Z[η±, η̄±], equation (5) becomes equation (6). It is often convenient to work with the effective action Γ[ψ±, ψ̄±], defined as the Legendre transform of W, in terms of which equation (6) reads as equation (7). This is the master equation from which one can get Ward-Takahashi identities connecting various vertex functions by taking appropriate derivatives. The two-point function for the + species of fermions, iS_F(x − y) = ⟨T ψ₊(x)ψ̄₊(y)⟩, is given in terms of Γ. Taking the functional derivative of the master equation (7), once with respect to ψ̄₊(y) and once with respect to ψ₊(z), one obtains the Ward-Takahashi identity (9) for the fermion Green's function, which fixes S_F⁻¹ exactly. This identity is very powerful, since it allows an exact determination of the propagator in this interacting theory. An exactly similar identity also holds for the propagator of the − species of fermions. It is worth mentioning that this model is one of the rare cases where the full propagator of the theory is known without any approximation. The presence of a physically observable particle in a theory manifests as poles of the propagator in momentum space. In our case, as is clearly evident, the propagator is regular everywhere in momentum space, which means that the Dirac fermion in our theory is not a propagating mode.
This is particularly surprising, since we started with a free Dirac theory with a constraint condition on the currents, and it appears that the condition is severe enough to forbid free fermion propagation.
In the absence of Dirac fermions, it is natural to inquire about the particle excitations in this theory. In order to answer this question, it is instructive to study the four-point function in this theory. Apart from the trivial non-propagating solution discussed above, and assuming the validity of translational invariance, it can be shown that the Ward-Takahashi identity for the four-point function admits a solution of the kind: where f is some function of (x 1 − x 2 ). This means that this Ward-Takahashi identity allows for the propagation of the composite operator ψ(x)ψ(y)| x=y , which describes charge neutral excitations consisting of fermion-antifermion bound states. It is worth mentioning that the absence of fermions as elementary excitations and the occurrence of bound states in a constrained theory like the above also appeared in a model of colour confinement proposed by Rajasekaran and Srinivasan [16]. Interestingly, they showed that quarks and gluons (which appeared as bound states) did not propagate and were confined, whereas mesons (colour neutral bound states of quarks) were propagating excitations in their model.
Electromagnetic response
In this section we focus our attention on the electromagnetic response of the theory. Lagrangian (4) with the minimal coupling of the fermion fields to the photon field is given by: In order to find the response of the theory under the influence of the photon field, the photon field must be treated as an external field. Terms involving ghosts and gauge fixing, which are absent in the above Lagrangian, have been incorporated by an appropriate modification of the measure D[A µ ]. In order to take into account the effects due to quantum corrections, which arise from virtual fermion loop excitations, one needs to find the effective action by integrating out the fermion field. The effective action up to quadratic terms in the fields, obtained using a derivative expansion of the fermion determinant [24,25], reads: It can be shown that this approximation is valid in the limit of large m, where the higher order terms can be neglected. As is evident, a µ did not have a kinetic term to start with, but fermion loops have made it dynamical. Further, the A µ and a µ fields are coupled by a mixed Chern-Simons term, which has a topological nature [26,27,28,29,30]. In other words, this implies that the a µ field has now become electromagnetically charged due to the presence of a virtual fermion cloud around it, with the current being given by J µ = ǫ µνρ ∂ ν a ρ , which is conserved off shell by construction. It is interesting to note that in this effective Lagrangian the pseudovector field a µ is coupled to the dual of F µν , so that the effective action is even under parity. We started with a theory consisting of two species of massive Dirac fermions, with the assumption of current confinement. The current confinement, being an independent condition in the sense that it is not a consequence of the equations of motion of the theory, was understood as a constraint. We employed a judicious way of implementing this constraint using the Lagrange multiplier field a µ , which essentially does the book keeping of the constraint.
Even though the constraint condition is stated in terms of fermion fields, it can not be viewed in isolation, since the Dirac fields are coupled to the photon field. Thus even after integrating out the fermion fields, the effects of the current confinement condition survive and manifest as a coupling between a µ and A µ in (10). Since the role of a µ is only to ensure implementation of the constraint, it is imperative to integrate it out to see the effect of current confinement on the dynamics of the photon field. On integrating out the a µ field from Lagrangian (10), one arrives at an effective action for the electromagnetic field: As is evident, the interaction with the a µ field has induced a gauge invariant mass M = 12|m|/π for the physical electromagnetic field [31]. One may wonder whether the differential operator 1/∂ 2 in the Lagrangian may compromise locality and causality. However, it has been shown in Ref. [31] that both of these features are intact. The action of 1/∂ 2 on a function becomes transparent by going over to Fourier space, that is: where f̃(p) is the Fourier transform of f (x). (Alternatively, one may also consider the action of this differential operator in terms of a convolution by a suitable Greens function G(x), subject to the appropriate boundary conditions of the problem. The Greens function G(x) is defined to solve ∂ 2 G(x) = δ(x); with the knowledge of the boundary conditions, this can formally be inverted.) The occurrence of such terms in an action has long been known: for example, such a term appears in the effective action of the Schwinger model when one integrates out the fermions, and also in the action of the two dimensional gravity theory studied by Polyakov [31].
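The omitted Fourier-space expression presumably takes the form below; the sign and measure conventions are our assumption, with a three-dimensional momentum integral since the theory lives in 2+1 dimensions:

```latex
\left(\frac{1}{\partial^2} f\right)(x)
\;=\; -\int \frac{d^3p}{(2\pi)^3}\; e^{-ip\cdot x}\,\frac{\tilde f(p)}{p^2},
\qquad
\tilde f(p) \;=\; \int d^3x\; e^{\,ip\cdot x}\, f(x),
```

which makes it clear that 1/∂ 2 acts as ordinary multiplication by −1/p 2 in momentum space.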
It is known that in the planar world there exists the Chern-Simons Lagrangian: which is also gauge invariant and describes a massive photon field [27]. However, unlike Lagrangian (11), the mass term in this theory has a topological origin, and the theory evidently violates parity. It may be tempting to believe that the photon mass term in the theory occurs because of some kind of spontaneous symmetry breaking and an associated Anderson-Higgs mechanism. However, note that the theory is invariant under two kinds of continuous rigid transformations, which are generated by the two conserved charges Q 1,2 = ∫ d 2 x : j 0 + ± j 0 − :. The vacuum expectation value of the conserved currents ⟨vac|j µ ± (x)|vac⟩ can be written as [32]: where tr stands for the trace over Dirac indices. Since S F,± (x − y) = const. × δ(x−y) in this theory, one finds that ⟨vac|j µ ± |vac⟩ = 0. This straightforwardly implies that the charges Q 1,2 annihilate the vacuum, Q 1,2 |vac⟩ = 0, in this theory. This emphatically shows that there is no spontaneous symmetry breaking whatsoever, and that the current confinement is responsible for the photon mass.
Boundary theory
In the above discussions we have assumed that the theory lives on a two dimensional manifold whose boundary lies at infinity, and further that all the fields in the discussion decay sufficiently quickly that the surface terms in the action contribute negligibly. In this section we shall consider the case when the boundary is finite, in which case it may not be possible to ignore the contribution due to the surface terms.
As noted above, the low energy effective action describing the dynamics of the low energy electronic excitations, subject to the current constraint and coupled to the electromagnetic field, is given by (10): Note that the last, mixed Chern-Simons term is not invariant under the local gauge transformation A µ → A µ + ∂ µ Λ, where Λ is some regular function of x. As a result, the change in the action is given by: The above volume integral can be converted to a surface integral, defined on the closed boundary of the manifold, to give an action: This term, as it stands, is not gauge invariant, and is defined on the boundary, which encloses the bulk. Gauge invariance of any given theory is a statement that the theory is constrained and possesses redundant variables. We observe that our theory to start with was gauge invariant at the classical level. One loop corrections, arising from fermion loops, generate a Chern-Simons term, which exhibits gauge noninvariance. Because our theory to start with was gauge invariant, and hence constrained, consistency demands that the quantum (corrected) theory should also respect the imposed constraints, and hence should be gauge invariant. The occurrence of the above gauge noninvariance simply implies that one is only looking at one particular sector of the theory, and that there exists another dynamical sector whose dynamics compensates the one above so as to render the total theory gauge invariant. Following Ref. [5], we demand that there must exist a corresponding gauge theory living on the boundary, defined such that it contributes a gauge noninvariant term of exactly opposite character and hence cancels the one written above. The simplest term, living on the boundary, that obeys the above condition is: where θ(x, t) is a Bose field, which transforms as θ → θ + Λ under a gauge transformation.
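The passage from the volume integral to the surface integral can be sketched. Writing the mixed Chern-Simons term as k ∫ d 3 x ǫ µνρ A µ ∂ ν a ρ , with k the loop-induced coefficient (whose precise value is not needed here), its gauge variation is a total derivative:

```latex
\delta S
\;=\; k\int_{\mathcal M} d^3x\; \epsilon^{\mu\nu\rho}\,(\partial_\mu \Lambda)\,\partial_\nu a_\rho
\;=\; k\int_{\mathcal M} d^3x\; \partial_\mu\!\left( \Lambda\, \epsilon^{\mu\nu\rho}\, \partial_\nu a_\rho \right)
\;=\; k\oint_{\partial\mathcal M} dS_\mu\; \Lambda\, \epsilon^{\mu\nu\rho}\, \partial_\nu a_\rho ,
```

which vanishes when the boundary is at infinity and the fields decay, but survives on a finite boundary.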
In general, this scalar field would be dynamical, and with a gauge invariant kinetic term the boundary action reads: Owing to its peculiar gauge transformation property, a quadratic mass term for θ is not gauge invariant. Hence, in a gauge theory framework like this, the θ field remains massless. Since the coupling of the θ field with the a field is anomalous, it turns out that the chiral current in this quantum theory is no longer conserved.
Conclusion
In this paper, we have shown that, in a parity invariant theory of two free Dirac fields living on a plane, confinement of the current j µ + − j µ − gives rise to the photon mass. To the best of our knowledge this is the only model in which current confinement paves the way to a gauge boson mass. A unique feature of this mechanism of photon mass generation is that no kind of spontaneous symmetry breaking is involved. It is found that when the theory is defined on a manifold with a boundary, consistency implies the existence of massless particles on the boundary. It would be interesting to investigate whether it is possible to have a composite photon from confinement, as was seen in planar QED with a tree level Chern-Simons term [1]. Further, it is believed that the connection between gauge boson mass, compositeness and current confinement, as seen in this theory, could have some implications in the theory of strong interactions, QCD. Work along these lines is in progress and shall be published in due course.
"year": 2017,
"sha1": "1e5b6d61f42d7300a8ac81f8bb33b2bd82b14923",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.physletb.2017.05.088",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "1e5b6d61f42d7300a8ac81f8bb33b2bd82b14923",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Bichromatic homodyne detection of broadband quadrature squeezing
We experimentally study a homodyne detection technique for the characterization of a quadrature squeezed field where the correlated bands, here created by four-wave mixing in a hot atomic vapor, are separated by a large frequency gap of more than 6 GHz. The technique uses a two-frequency local oscillator to detect the fluctuations of the correlated bands at a frequency accessible to the detection electronics. Working at low detection frequency, the method allows for the determination of both the amplitude and the phase of the squeezing spectrum. In particular, we show that the quadrature squeezing created by our four-wave mixing process displays a noise ellipse rotation of $\pi/2$ across the squeezing spectrum
Introduction
Squeezed light, where the uncertainty on one of the field quadratures is brought below the limit given by the balanced application of the Heisenberg uncertainty principle, can improve the sensitivity of classical measurements when their operation is limited by the quantum noise of classical light. It has been experimentally demonstrated that squeezed light can increase the signal-to-noise ratio in spectroscopy [1], it has been theoretically proposed to improve the sensitivity of interferometry over a large band of frequencies [2], and it has been suggested that spatially multimode squeezed light can improve the resolution of optical imaging [3]. In quantum information science, squeezed states are the workhorse for obtaining continuous variable entanglement for unconditional protocols [4]. The state-of-the-art methods for the generation of squeezed light include nonlinear processes such as parametric down-conversion (PDC) in solid-state crystals [5], four-wave mixing (4WM) in atomic vapours [6], and both PDC and 4WM in optical fibres [7].
In the case of continuous pumping, these nonlinear parametric processes induce quantum correlations over a finite bandwidth on pairs of sidebands symmetrically placed about a center frequency (CF), which is determined by the frequency of the pump light. If the nonlinear process is frequency degenerate this results in a continuous range of pairwise correlated frequency sidebands centered on and containing the CF. If the process is frequency non-degenerate the correlated sidebands are contained in two disjointed frequency ranges symmetrically placed with respect to the CF. The two possible superpositions of two correlated sidebands both display squeezing on one of their quadratures and they are collectively referred to as a two-mode squeezed state (TMSS).
The orthodox scheme to detect quadrature squeezing is the balanced homodyne detection [8], whereby the correlated sidebands beat with a strong local oscillator (LO) field at the CF. This method is most appropriate in the degenerate case, where the frequencies of the LO and the sidebands are close. In this case the squeezing spectrum, i.e. the range of frequencies at which the beats display quantum correlations, should start at DC. Noise measurements near DC however are greatly influenced by 1/f technical noise, so in practice the light noise is measured at a high enough analysis frequency (AF), away from the technical noise band.
In the non-degenerate case, where the frequency separation between correlated sidebands is potentially large, the homodyne detection of these states with a LO at the CF can become experimentally challenging due to the limited frequency response of low-noise photodetectors and electronics. The solution to this problem is to use a two-frequency-component local oscillator, i.e. a bichromatic local oscillator (BLO), also referred to as a two-tone LO. The BLO homodyne detection method has been theoretically proposed for configurations where the correlated sidebands are separated by the use of a cavity [9] and for arbitrary sideband frequency separations by Marino et al. [10]. Since then there have been few realizations of the concept. For instance, multi-frequency LO homodyne detector set-ups have been experimentally realized for the detection of entanglement in cluster states of optical frequency combs in both CW [11] and pulsed [12] regimes. More relevant to the present work, quadrature-squeezed light generated in optical parametric oscillators (OPO) has been detected using a BLO employing a frequency separation of a few MHz, in order to avoid limitations imposed by technical noise at low analyzing frequencies [13]. In atomic vapors, the BLO technique has been recently used to detect multi-spatial-mode quadrature squeezing in a continuous squeezing bandwidth with large sideband separation [14].
In our experiment we use non-degenerate 4WM to create a TMSS that contains two correlated light modes separated by 6 GHz as shown in Fig.1(a). As these modes are spatially distinct we combine them to generate a single spatial mode squeezed state (SSMSS). We show that the BLO homodyne detection setup can be used to fully reconstruct the squeezing spectrum of the SSMSS state. By controlling the phase of the BLO, we measure the rotation of the noise ellipse phase (NEP) which results from the dispersion of the medium.
Homodyne detection
The canonical technique to detect small fluctuations on a signal field is balanced homodyne detection. This technique uses a LO that is overlapped with the signal mode to be analyzed on a 50/50 beamsplitter (BS HD ). The two output intensities are detected by a balanced photodetector which forms the photocurrent difference [ Fig.1(b)] and the trans-amplified signal is spectrum analyzed. Let us call d̂ 1 and d̂ 2 the photon number operators in the two output arms of the BS HD . They can be expressed through the positive frequency part of the signal electric field Ê S and the LO electric field Ê LO as: up to a constant factor. With the above, the subtracted photocurrent has the form: Let us consider a coherent state LO Ê LO = b̂ e −iω LO t with b = |β |e iϕ LO , where b̂, β , ω LO , and ϕ LO are the annihilation operator, amplitude, frequency and phase at the BS HD of the LO, respectively. The signal field contributing to the component of î − at frequency Ω is then the two-mode field, up to a constant factor: with ω ± = ω LO ± Ω, where â ± are the annihilation operators of the corresponding signal bands at frequencies ω ± . The quadratures of the signal field are: When the signal is a TMSS, correlations between the bands at frequencies ω + and ω − lead to fluctuations of the quadratures taking the form [15]: where s is the degree of squeezing and θ is the squeezing angle, which is referenced to ϕ LO . One can derive the variance of the homodyne detector signal in the limit of a LO much stronger than the signal [15]: We have introduced the phase χ, which also accounts for the more general case where the phase reference for θ is taken at a different location from the BS HD . One can take , where ϕ ± are the propagation phases for modes â ± , and χ ± are the differences between the phases of the LO and the signal bands. The minimum noise is observed for χ = θ /2 and it is below the shot noise, which is given by 2|β | 2 .
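To make the dependence on the LO phase concrete, the quadrature noise of the symmetric sideband superposition can be checked numerically with the standard Gaussian covariance-matrix formalism. This sketch is not taken from the paper: the normalization (vacuum variance 1, i.e. the 2|β|² shot-noise prefactor scaled out) and the sign convention for the squeezing angle are our assumptions, chosen so that χ = θ/2 gives the minimum noise, as in the text.

```python
import numpy as np

def tmss_covariance(r, theta):
    """Covariance matrix of a two-mode squeezed vacuum in the quadrature
    ordering (x+, p+, x-, p-).  Vacuum variance normalized to 1 (our
    convention, with the 2|beta|^2 shot-noise prefactor scaled out)."""
    c, s = np.cosh(2 * r), np.sinh(2 * r)
    R = np.array([[np.cos(theta), np.sin(theta)],
                  [np.sin(theta), -np.cos(theta)]])
    A = c * np.eye(2)
    C = -s * R  # sign chosen so that chi = theta/2 minimizes the noise
    return np.block([[A, C], [C, A]])

def homodyne_variance(r, theta, chi):
    """Noise of the symmetric superposition of the two sidebands,
    measured with local-oscillator phase chi (shot noise = 1)."""
    u = np.array([np.cos(chi), np.sin(chi),
                  np.cos(chi), np.sin(chi)]) / np.sqrt(2)
    return u @ tmss_covariance(r, theta) @ u
```

For r = 1 the minimum, reached at χ = θ/2, equals e^{−2r} ≈ 0.135 of shot noise, and the orthogonal quadrature sits at e^{+2r} ≈ 7.39, reproducing a cosh 2s − sinh 2s cos(2χ − θ) dependence up to the 2|β|² prefactor.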
In the single frequency LO scheme, the signal is at the beat frequency Ω = (ω + − ω − )/2 between the sidebands and the LO. This technique is most useful when the frequencies of the correlated sidebands are close, i.e. for small Ω, as analyzing quantum noise at high frequencies can be challenging or even impossible. For large Ω, the solution is to use a bichromatic local oscillator with two frequency components near the corresponding frequencies of the TMSS correlated sidebands, i.e. Ê BLO = b̂ 1 e −iω L1 t + b̂ 2 e −iω L2 t , with b̂ 1,2 the annihilation operators for the fields at frequencies ω L1,L2 , where these components can be described as coherent states b 1 = |β 1 |e iϕ L1 and b 2 = |β 2 |e iϕ L2 , with amplitudes β 1,2 and phases ϕ L1,L2 . The noise response at low analyzing frequency is then dominated by the beat of the correlated sidebands with their nearest BLO components. The theory of the BLO detection method is described in detail in [10] and here we will only write up the result for the subtracted photocurrent operator for the case when the LO components have the same amplitude and are symmetrically placed about the CF, i.e. for: The last equation shows that the homodyne detector signal depends on the relative phase differences χ 1 = ϕ L1 − ϕ − and χ 2 = ϕ L2 − ϕ + between the local oscillator components and their corresponding TMSS sidebands. Although Eq. (7) was derived for a single pair of correlated sidebands, each frequency component of the BLO interacts with a pair of sidebands; therefore there exist two pairs of detected sidebands, lying within the squeezing spectrum or not, whose noises add in quadrature. The effect of these so-called image bands [10] is discussed later.
Mode-matching of correlated sidebands in non-degenerate 4WM
We consider non-degenerate 4WM in a hot Rb vapor. In this process two photons from a strong pump beam interact with a Rb atom to create a pair of probe (p) and conjugate (c) (or Stokes and anti-Stokes) photons at frequencies oppositely detuned from the pump, as shown in Fig.1(a). The TMSS state generated in this manner is entangled across the probe and conjugate sidebands of two spatially distinct modes [16], separated by twice the hyperfine splitting of the ground state, 2ω HF . The phase-matching condition of the 4WM process requires the generated TMSS components to propagate symmetrically about the pump axis at a finite angle [17]. In order to create the SSMSS, we overlap the two separated modes on a 50/50 beamsplitter (BS M , M for "mixing"). The result after the overlap is shown in Fig.1(c). It is important to note that each of the left (L) and right (R) input modes of the BS M contains probe and conjugate components [ Fig.1(c)] and that these components may interfere at the BS M with different phases. This phase mismatch between the probe and conjugate components of the L and R channels plays an important role in the transformation from TMSS to SSMSS.
Let us consider the SSMSS state on one of the outputs of the BS M as shown in Fig.1(d). The operators for the fields at the probe and conjugate frequencies are: where â mn with m = p, c and n = L, R are the photon annihilation operators of the probe and conjugate components in the L and R input channels and ϕ mn are their phases at the BS M . In other words, the output is the superposition of two TMSS, containing probe in L and conjugate in R (pL − cR) and vice-versa (cL − pR). For symmetry reasons these two TMSS are described by a single complex squeezing parameter se iθ . Substituting Eq.(8) into Eq.(4), replacing the indexes − and + with p and c, we obtain the variance of the quadrature operators as [15]: where ϕ t = (ϕ pR + ϕ pL ) + (ϕ cR + ϕ cL ) and ∆ϕ = (ϕ pR − ϕ pL ) − (ϕ cR − ϕ cL ). The propagation phases ϕ t and ∆ϕ can be expressed through the sum and difference of the geometrical paths l L and l R of the left and right modes, and the probe and conjugate frequencies ω p and ω c , as: From Eq.(9) one can see that for ∆ϕ = π the X̂ and Ŷ quadratures of the combined state display noise above shot noise regardless of the squeezing angle θ . On the contrary, the condition ∆ϕ = 0 leads to variances similar to those in Eq.(5). This can be understood by looking at the overlap of the two TMSS at the BS M in phase space [ Fig.1(c)]. Each component of the TMSS state (pL − cR or cL − pR) is independently transformed into an SSMSS as shown in Fig.1(c), and the best overall squeezing is obtained for ∆ϕ = 0. This can be achieved by ensuring that the difference between l R and l L is much smaller than the wavelength associated with the frequency difference ω p − ω c [see Fig.1(d)]. With these assumptions the noise in the quadratures simplifies to: The last expression recovers the familiar form of the quadrature noise given in Eq.(5), apart from a common phase ϕ t . This extended analysis is necessary due to the bichromatic nature of the generated correlations [18].
The last step in the analysis is to obtain an expression for the SSMSS noise, as discussed in Sec.2.1. In our case the correlations are bichromatic and the homodyne detector uses a BLO. Then the variance of the detector subtracted photocurrent is given by Eq.(7) and has the form In the last expression we have substituted ϕ ′ = χ p + χ c , with χ p = ϕ LO p − ϕ p and χ c = ϕ LO c − ϕ c being the phase differences between each of the components of the BLO, and the corresponding sideband of the SSMSS.
Experimental setup
A simplified experimental setup is shown in Fig.2(a). A heated rubidium cell is pumped by two pump beams separated vertically, producing two non-overlapping 4WM amplifiers. One of the amplifiers is seeded at ω p to produce the bright components of the BLO at ω p and ω c , while the other amplifier generates the correlated modes of the TMSS. The seed beam is generated via an acousto-optic modulator (AOM) in double-pass configuration with an RF drive frequency approximately matching half of the ground state hyperfine splitting ω HF . A mixing beamsplitter, the same as BS M in Sec.2.2, overlaps the modes of the TMSS and produces the SSMSS at one of the outputs. Equivalently, the bright fields are mixed on the same BS M to produce the BLO. This method of producing the BLO ensures phase stability of the components of the BLO with respect to the signal as well as automatic spatial mode matching [16,14].
For detection, the BLO and the SSMSS from one of the BS M outputs are further overlapped on a second 50/50 beamsplitter, the same as BS HD in Sec.2.1, at the balanced homodyne detection stage. The relative phase between the SSMSS and the BLO controls the measured quadrature and is tuned by changing the BLO pathlength with a piezo-electric actuator. This means that both ϕ LO p and ϕ LO c are scanned simultaneously. The output channels of the BS HD are detected by a balanced photodetector and the produced photocurrent noise is observed on a spectrum analyser in zero span. Sample noise traces are presented in Fig.4.
As described in Sec.2.2, adjusting the difference between paths l R and l L allows for an efficient transformation of the TMSS correlations into a SSMSS. This is done by canceling the path difference between the gain region and the BS M , as shown in Fig.1(d). Practically, a way to achieve this is to temporarily seed the 4WM process responsible for the squeezed vacuum generation at the probe frequency, in both the L and R directions. The two output modes L and R then contain bright beams at the probe and conjugate frequencies and form a bichromatic interferometer whose fringe visibility can be used as a benchmark for minimizing the pathlength difference ∆l = l R − l L ("white light" fringes). In the experiment, the pathlength difference is altered and the quality of the overlap is monitored by observing the interference contrast for the bichromatic light. The achieved visibility is more than 99%, which indicates a pathlength difference of less than 2 mm.
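The quoted numbers can be cross-checked with a simple two-beat model. Assuming equal-amplitude probe and conjugate components separated by Δν ≈ 6.07 GHz (twice the Rb-85 ground-state hyperfine splitting; the exact value is our assumption) and a bichromatic fringe visibility V(Δl) = |cos(π Δν Δl / c)|:

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
DNU = 6.07e9        # probe-conjugate separation ~ 2*omega_HF (Rb-85), Hz (assumed)

def visibility(dl):
    """Bichromatic ('white light') fringe visibility for a path imbalance dl,
    assuming two equal-amplitude beat notes separated by DNU."""
    return abs(np.cos(np.pi * DNU * dl / C))

def max_imbalance(v_min):
    """Largest path imbalance compatible with a measured visibility v_min."""
    return C * np.arccos(v_min) / (np.pi * DNU)
```

With these numbers, visibility(2e-3) ≈ 0.992 and max_imbalance(0.99) ≈ 2.2 mm, consistent with the statement that a visibility above 99% bounds the pathlength difference below about 2 mm.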
Squeezing spectrum
The 4WM process operates close to zero two-photon detuning and therefore generates correlations between sidebands symmetrically placed at ω HF about the pump frequency, inside a bandwidth σ , as discussed in Sec.1. The detection of these correlations is done at a finite analysis frequency of ω A = 2π × 1 MHz, away from the 1/f technical noise band. In this case, the measurement at ω A ≠ 0 cannot distinguish between positive and negative frequencies and the complete treatment requires the inclusion of image bands symmetrically placed around the components of the BLO [see Fig.3(a)].
Fig. 4. Homodyne detector signal for two different two-photon detunings δ 1 = 2π × 12 MHz (green) and δ 2 = 0 (blue). The measurement is taken in a single run by alternating the AOM driving frequency. The lines are cosine fits to the data and the black line represents the shot-noise level.
The measured field is then the vector sum of the fields at frequencies
±ω A for the probe and conjugate components of the BLO, and their fluctuations add independently (in quadrature). In our case the squeezing bandwidth is 2π × 40 MHz, much larger than ω A , as shown in Fig. 3(b). In these conditions both contributions ω A and −ω A to the photocurrent noise are a good representation of the noise at DC, that is to say of the squeezing at the BLO component frequencies. A careful measurement to determine the squeezing at DC has been done in [19]. Figure 3(b) shows the squeezing spectrum, obtained by scanning the frequency bands of the BLO, and the corresponding effective 4WM gain. The latter is the amplification factor of a probe seed propagating through the nonlinear medium. At high δ , the amount of squeezing mirrors the gain, as expected from quantum amplification theory [20]. At low δ , excess noise is observed as the effective 4WM gain peaks. This is due to the resonant gain being made of a large pure gain compounded with large absorption [17], which results in a loss of quantum correlations between the probe and conjugate bands. Note that a similar effect could result from a lack of alignment at the BS M for the signal and/or the BLO because of propagation effects such as the cross-Kerr effect close to resonance. As the system is fundamentally multi-spatial-mode [16], misalignment on the BS M mixes in uncorrelated noisy modes, resulting in measured excess noise. We have checked that the overlap remains constant by comparing the quadrature squeezing spectrum of Fig.3(b) with the intensity-difference squeezing spectrum [21], which does not depend on alignment quality. The conclusion is that the spectrum in Fig.3(b) accurately reflects the amount of quadrature squeezing present in the system.
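As a benchmark for the "squeezing mirrors the gain" statement, an ideal, lossless two-mode amplifier with intensity gain G would give a minimum quadrature noise of (√G − √(G−1))², using the textbook relation cosh²s = G for phase-insensitive amplification. This is our idealized benchmark, not a fit to the measured data:

```python
import numpy as np

def ideal_min_noise_db(G):
    """Minimum quadrature noise (dB relative to shot noise) of an ideal,
    lossless two-mode amplifier of intensity gain G >= 1, taking cosh(s)^2 = G."""
    s = np.arccosh(np.sqrt(G))
    # exp(-2s) equals (sqrt(G) - sqrt(G - 1))**2
    return 10 * np.log10(np.exp(-2 * s))
```

For example, G = 4 would ideally allow about −11.4 dB of squeezing; the absorption near resonance degrades this, which is why the measured noise departs from the gain curve at low δ.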
Noise ellipse rotation
The BLO homodyne detector operating at a small analyzing frequency, effectively DC, allows the dependence of the squeezing angle on the sideband frequency to be determined. In other words, not only the amplitude s(δ ) but also the phase θ (δ ) of the squeezing spectrum can be measured, down to an overall phase. The measurement is performed by recording the phase of the BLO that gives the lowest noise measurement (i.e. the angle of the small axis of the noise ellipse) as the BLO detuning δ is scanned. In practice, because in our experiment the relative phases between the signal components and the BLO components are not stable over the long term, the homodyne measurement of the signal quadrature is performed by rapidly alternating the value of the BLO detuning, here the two-photon detuning δ of the seed beam, with a reference value δ 0 = 2π × 4 MHz, as the phase of the BLO is scanned. An example of the resulting noise measurement is shown in Fig. 4. Besides the change in squeezing amplitude, it is obvious that the measured squeezing phase is identical for both BLO frequencies, and we have checked that this is the case across the squeezing spectrum. This is to be expected since the BLO is generated in a similar fashion as the quadrature-squeezed vacuum itself. As a result, the frequency components of the BLO are subject to the same retardation effects as those for the sidebands of the signal against which they beat.
Since by design the BLO is at a constant phase with respect to the signal across the squeezing spectrum, evaluating a possible noise ellipse rotation of the squeezing is here a matter of measuring the phases ϕ LO p and ϕ LO c of the BLO frequency components generated by the 4WM process as they scan the squeezing spectrum. This is achieved using a heterodyne beatnote technique [22] whereby the 4WM process is seeded with two beating frequencies, a fixed one as a reference and a variable one. Both components are amplified and produce conjugate components which also beat [see Fig. 2(b)]. By recording how the phases of the resulting output amplitude beats are shifted with respect to the phase of the input beat, one can reconstruct the output phases as a function of δ . Figure 5(a) shows the result of such measurements across the squeezing spectrum, where the phase shifts have been referenced to the phase of the beat on the seed before the 4WM medium.
From Eq. (12) one can see that the value of the phase sum ϕ LO p + ϕ LO c which minimizes the noise reflects, up to a constant, the value of the squeezing angle θ . Figure 5(b) displays the average (ϕ LO p + ϕ LO c )/2, which represents the noise ellipse orientation, as a function of δ and shows a noise ellipse rotation of up to π/2 across the squeezing spectrum.
Discussion
Broadband squeezed light that exhibits noise ellipse rotation can be used to improve the sensitivity of suspended-mirror interferometers, e.g. gravitational wave detectors, beyond the standard quantum limit [2]. Amplitude squeezing at low frequencies minimizes the radiation pressure noise and phase squeezing at high frequencies minimizes the photon counting noise. In Sec.4.2 we demonstrated that the SSMSS noise ellipse rotates by π/2 across the squeezing bandwidth. As shown in Fig. 5(c), combining this squeezed vacuum with a single bright carrier at the CF transforms it into amplitude squeezing at the low-frequency end of the squeezing spectrum and phase squeezing at the high-frequency end. Note however that, as in the case of a single-frequency LO at the CF, the squeezing spectrum is centered at a frequency much higher than the squeezing bandwidth, which is unsuitable for a suspended interferometer.
It is legitimate to ask whether a bichromatic bright carrier could solve this issue. With a pair of carriers in the middle of each correlated band [ω p and ω c in Fig. 5(c)], it is always possible to choose their relative phase to ensure amplitude or phase squeezing at DC or at a very low analyzing frequency, as was done in Section 4.2. It is however clear from Fig. 5(c) that at higher analyzing frequencies the noise ellipses of the contributing image pairs of correlated bands have different orientations and collectively generate excess noise.
It is possible to place the bright carriers on the edges of the correlated bands, at frequencies ω ′ p and ω ′ c in Fig. 5(c). In this case the carrier phases can be set so that the correlated sidebands induce amplitude and phase squeezing at low and high analyzing frequencies respectively. As usual, image sidebands will also contribute to the signal, but since they lie outside the squeezing spectrum in a region where the 4WM gain is unity, they will contribute half the shot noise, limiting the measurable amount of squeezing to 3 dB across the squeezing spectrum [10]. Note that these limitations do not apply to the configuration for which the 4WM process is made to operate as a phase-sensitive amplifier, as demonstrated explicitly by Corzo et al. [19].
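The 3 dB bound quoted above follows from a simple noise average (a sketch, with all noise powers normalized to shot noise): the detected noise is the mean of the squeezed-band noise $N_{\mathrm{sq}} < 1$ and the unity-gain image-band shot noise $N_{\mathrm{shot}} = 1$,

```latex
N_{\mathrm{det}} = \frac{N_{\mathrm{sq}} + N_{\mathrm{shot}}}{2}
\;\xrightarrow{\;N_{\mathrm{sq}} \to 0\;}\; \frac{1}{2},
\qquad
10 \log_{10}\!\left(\tfrac{1}{2}\right) \approx -3\;\mathrm{dB},
```

so even perfect squeezing in the signal band appears as at most 3 dB of measured noise reduction.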
Conclusion
We have investigated the BLO homodyne detection scheme and shown its indispensability to analyze bichromatic SSMSS where the correlated bands are separated by a frequency range inaccessible to low-noise electronics and detectors. We have measured the squeezing bandwidth of the SSMSS generated via non-degenerate 4WM to be of the order of 2π × 40 MHz and shown that in this bandwidth the squeezed state noise ellipse rotates by about π/2, which is equivalent to swapping initial amplitude squeezing to phase squeezing.
Although the band separation was 6 GHz, we expect the method to be applicable to any separation provided phase stability can be ensured between the frequency components of the BLO and the signal bands. | 2016-09-11T19:09:30.000Z | 2016-09-11T00:00:00.000 | {
"year": 2016,
"sha1": "f7ebffaa0beaed77a03b51865eecafc4ce8894c7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.24.027298",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "f7ebffaa0beaed77a03b51865eecafc4ce8894c7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
37321406 | pes2o/s2orc | v3-fos-license | HETEROTOPIC PANCREAS REVEALED BY POST-TRAUMATIC PANCREATITIS
Heterotopic pancreas (HP) is characterized by pancreatic tissue found in ectopic locations at various sites of the body, most frequently in the gastrointestinal tract. This anatomic variation is quite frequently observed in postmortem examinations but is very rare and difficult to demonstrate by non-invasive imaging modalities. In this case, we would like to emphasize the relevance of a well-targeted high-definition ultrasound study to characterize nonspecific tissular abnormalities observed on the first imaging modality. HP is most often asymptomatic but can present the same pathology as the normotopic pancreas or lead to mechanical complications due to its aberrant localization. Pancreatic inflammation can be idiopathic but is most often caused by gallstones or alcohol. Less common causes are auto-immune pancreatitis, drug-induced pancreatitis, vasculitis, viral infections, hypertriglyceridemia or hypercalcemia, porphyrias and direct trauma of the gland (including post-ERCP). The diagnostic criteria for pancreatitis combine characteristic abdominal pain with serum elevation of amylase and/or lipase, and characteristic findings of acute pancreatitis on CT scan.
Case report
An 82-year-old woman presented to the emergency room with a 12-hour history of increasing continuous epigastric abdominal pain. Symptoms began shortly after the patient received a blow to the epigastrium by falling on a small table.

Nausea, vomiting or melena were absent and the patient was apyretic. At physical examination central abdominal soreness was found, but peristaltism was still present and defense, rebound or organomegaly were absent.

JBR-BTR, 2012, 95: 83-86.

Unenhanced abdominal CT was performed on admission (Fig. 1) and revealed an inflammatory round mass snuggled up to the duodenojejunal flexure at the angle of Treitz. Peripheral mesenteric fat stranding, localized focal bowel wall thickening and an ill-defined fluid collection along the proximal jejunum were associated. The round inflammatory mass had a lobulated appearance with fatty infiltration, strongly resembling elderly pancreatic tissue.
High-resolution ultrasound study was secondarily performed with a linear probe (3-9 MHz) and clearly delineated a 2.5 cm round echogenic homogeneous mass, surrounded by an arciform non-peristalting and thickened duodenojejunal loop (Fig. 2). A central Y-shaped ductal system connected by a single duct to the thickened bowel wall was clearly delineated within the mass, confirming the presumed diagnosis of heterotopic pancreatitis.
Classic conservative treatment was proposed with imaging and biological follow-up. Spontaneous recovery was obtained.
Pancreatic serum tests reached a maximum level 3 days after admission (amylase 170 IU/l, lipase 564 IU/l) and regained normal levels 18 days later (amylase 95 IU/l, lipase 61 IU/l).

MR imaging was performed 10 days after admission. Axial T1-weighted series showed an area with a signal similar to that of the normotopic pancreas (Fig. 3). Unfortunately, this exam was of poor quality because of the difficulties of the patient to stay in apnea during the acquisitions. Moreover, peripheral mesenteric fat infiltration and fluid collections disturbed the precise visualization of the ectopic pancreas. The normotopic pancreas had a normal appearance.
Abdominal CT with intravenous contrast agent injection was performed 6 months later. Fat infiltration and fluid collections had completely disappeared and the presumed heterotopic pancreas showed a homogeneous enhancement pattern similar to that of normotopic pancreatic tissue (Fig. 4).
Heterotopic pancreas (HP) is defined as aberrant but well-developed pancreatic tissue lacking anatomic and vascular continuity with the main body of the pancreas. HP incidence ranges from 1% to 14% in the literature (1). The most frequent localizations are the stomach, duodenum, jejunum and ileum (including Meckel's or other diverticula). Less common sites include the liver, spleen, esophagus, biliary tract, fallopian tubes, mesentery and omentum, mediastinum or even umbilicus (1,2). Finally, heterotopic pancreatic tissue is frequently observed in gastric duplication cysts (3).
HP can have a submucosal localization (75%), or can be present within the muscularis propria or the serosal surface of the GI tract (4).
Variable amounts of pancreatic acinar and islet tissue are seen. The heterogeneity of these microscopic features is codified by the Heinrich classification. Class I lesions contain pancreatic acini, islets, and ducts; class II lesions contain acini and ducts but no islets; and class III lesions are composed of ducts alone.
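The Heinrich classes above amount to a simple lookup on which tissue components the specimen contains. As an illustration only (the function name and boolean encoding are ours, not part of the classification's original description), the rule can be written as:

```python
def heinrich_class(has_acini, has_islets, has_ducts):
    """Map the tissue components found in a heterotopic pancreas specimen
    to its Heinrich class ("I", "II", "III"), or None if the combination
    is not covered by the classification."""
    if has_ducts and has_acini and has_islets:
        return "I"    # acini, islets and ducts
    if has_ducts and has_acini:
        return "II"   # acini and ducts, no islets
    if has_ducts and not has_acini and not has_islets:
        return "III"  # ducts alone
    return None
```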
The proposed pathogeneses are transplantation of pancreatic cells to adjacent structures during embryonic development or metaplasia of multipotent endodermal cells.
HP can be seen at any age, but because of its slow growth it is most often observed in adults (5). Moreover, in most cases HP remains asymptomatic and is an incidental finding. Symptomatic HP is usually found in the stomach or duodenum, with complaints of epigastralgia mimicking peptic disease (1).
Potential complications of HP are mass effect causing bowel intussusception or obstruction, acute pancreatitis, and less frequently bleeding, cystic degeneration or malignancy of the exocrine or endocrine ectopic tissue.
The need for treatment depends on symptoms and a definite diagnosis, particularly excluding a malignant process (6). In our case, conservative treatment was privileged, as for classical mild entopic pancreatitis, after confirmation of clinical, biological and imaging recovery and owing to the patient's old age. However, some investigators recommend surgical treatment (7)(8)(9), especially if the diagnosis remains unclear.
Incidental finding of HP does not require any operation (8).
Pancreatic lobules and fatty interstitium were better characterized. There was no expansive process.
The definite diagnosis of acute post-traumatic heterotopic pancreatitis was finally made on the basis of multimodality high-quality imaging, biological and clinical follow-up.
Discussion
Anatomical congenital pancreatic abnormalities are classified as: pancreas divisum, annular pancreas, agenesis of the dorsal pancreatic bud and ectopic pancreatic tissue.
Heterotopic pancreatitis can lead to hemorrhage, necrosis, bowel perforation and acute or chronic inflammation, although it is usually only a microscopic finding. Late complications like pseudocyst formation have been reported (10). During acute HP inflammation, the elevation of serum amylase and lipase levels remains rather limited due to the small volume of pancreatic tissue in the heterotopic pancreas. In some systemic causes of pancreatitis, like drug-induced or autoimmune pancreatitis, simultaneous inflammation of the normotopic and ectopic pancreas can be observed (11).
In our patient, the typical CT appearance of elderly pancreatic tissue with lobulation and fatty infiltration was observed within both the normotopic and the heterotopic pancreas, facilitating their characterization. The US demonstration of a central ductal system within the HP tissue was also of utmost importance to establish the correct diagnosis. Therefore we would like to put the emphasis on the relevance of a well-targeted high-definition ultrasound study following the first imaging modality.
In optimal conditions with a cooperative patient, an MRI exam could demonstrate the presence of a central ductal system, with T2-weighted and cholangio-MR sequences (6). This result could equally be obtained with endoscopic ultrasound (EUS) if HP is localized in the stomach or the duodenum, especially if the abdominal US exam is difficult (obesity…) (6). Finally, some authors reported the role of barium X-ray series to demonstrate nonspecific fold thickening with the characteristic appearance of a centrally umbilicated nodule in the gastric mucosa within the gastric heterotopic pancreatic rest (6).
F.C. Deprez 1, C. Pauls 1, B. Coulier 2

We report the case of an 82-year-old female presenting with acute epigastric abdominal pain after a traumatic blow to the epigastrium. High-resolution multimodal imaging comprising ultrasound, CT and MR, correlation with laboratory blood analyses and a 6-month CT follow-up allowed us to make a definite diagnosis of traumatic heterotopic pancreatitis. This case emphasizes the relevance of a well-targeted high-definition ultrasound study following the first imaging modality.

Key-words: Pancreatitis - Abdomen, injuries.

From: 1. Department of Radiology, Clinique St Pierre, Ottignies, Belgium, 2. Department of Radiology, Clinique St Luc, Bouge (Namur), Belgium. Address for correspondence: Dr F.C. Deprez, Department of Radiology, Clinique St Pierre, Avenue Reine Fabiola, B-1340 Ottignies, Belgium. E-mail: fabrice.deprez@uclouvain.be
Fig. 2. - High-resolution ultrasound study reveals a 2.5 cm in diameter round mass (arrows) adjacent to a non-peristaltic thickened duodenojejunal flexure. A branched ductal system (asterisk) connected to the inferior wall of the duodenojejunal loop (white arrowheads) is well observed, confirming the diagnosis of ectopic pancreas.
Fig. 4. - Contrast-enhanced CT obtained 6 months after admission demonstrates spontaneous recovery, with disappearance of fluid collections. On MPR coronal oblique (A) and sagittal (B) views, ectopic pancreatic lobulation and enhancement (white arrows) is well visible and appears similar to the normotopic pancreas (white arrow). HP is adjacent to the duodenojejunal flexure (black asterisks). | 2018-04-03T03:58:52.776Z | 2012-03-01T00:00:00.000 | {
"year": 2012,
"sha1": "4fb598ae80c5765780ef475dfb61380f82e4191a",
"oa_license": "CCBY",
"oa_url": "https://storage.googleapis.com/jnl-up-j-jbsr-files/journals/1/articles/100/submission/proof/100-1-193-1-10-20150511.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "4fb598ae80c5765780ef475dfb61380f82e4191a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
227332551 | pes2o/s2orc | v3-fos-license | Changes in meibum composition following plaque bachytherapy for choroidal melanoma
Objectives Dry eye is common when external beam radiation is used for the treatment of choroidal melanoma (CM). As meibum structure and composition have been related to dry eye, we determined if plaque brachytherapy for CM alters meibum composition. Design 1H-NMR spectroscopy was used to measure the lipid composition of meibum. Setting The University of Louisville, Kentucky, USA. Participants All 13 participants had CM and one participant had iris melanoma. Main outcome measures Cholesteryl ester (CE) to wax ester (WE) ratio, amount of meibum esters (ME) and meibum lipid saturation were measured. Results ME decreased by 80%±18% (±99% CI) in 11 eyes that were treated compared with the contralateral untreated eye. ME increased by 181% in two eyes that were treated compared with the contralateral untreated eye. The mole % CE/WE for meibum was significantly (p<0.0001) 67% lower in eyes that were irradiated compared with control eyes from donors without CM that were not treated. Plaque brachytherapy induced the de-esterification of CE. The intensity of the meibum cis double bond resonances did not change significantly (p>0.05). Conclusion Eyes that had plaque brachytherapy had a lower amount of expressible meibum and a lower CE/WE ratio compared with meibum from the contralateral eye that received no treatment and eyes that did not have uveal melanoma. Both the quality and quantity of meibum should be considered in designing a therapy for dry eye after plaque brachytherapy.
INTRODUCTION
Uveal melanoma is the most common primary intraocular malignancy in adults. It most commonly originates in the choroid (90%), followed by the ciliary body (6%) and then the iris (4%). Enucleation was the primary modality of treatment for uveal melanoma in the 1970s, prior to the Collaborative Ocular Melanoma Study. 1 This large multicentre trial demonstrated the safety of plaque brachytherapy with respect to long-term mortality and tumour control. 1 Iodine-125 is the most commonly used radioisotope, and the American Brachytherapy Society recommends 0.60-1.05 Gy/hour over three to seven consecutive days. 2 Dry eye is very common, 47%, when external beam radiation is used for the treatment of uveal melanoma. 3 The incidence of dry eye was 24% when proton beam radiotherapy was used to treat uveal melanomas. 4 Clinical dry eye is much less common with plaque brachytherapy in view of the localised radiation and the posterior location in choroidal melanoma. Dry eye was reported in 8.3% of patients at an average of 20.7 months after treatment. 5 With respect to iris melanoma, only 2 out of 23 patients treated with ruthenium plaque had clinical dry eye. Even with the anterior location, the dry eye incidence was low, and it was postulated that the reason was that lacrimal gland and conjunctival goblet cells were not included in the field of irradiation, in contrast to proton beam
Key messages
What is already known about this subject? ► Dry eye is common when external beam radiation is used for the treatment of choroidal melanoma. ► Meibum structure and composition have been related to dry eye in patients that did not have plaque brachytherapy.
What are the new findings?
► The amount of expressed meibum in eyes that had plaque brachytherapy was 80% lower in 11 of 14 eyes compared with the contralateral eye. ► Both the total cholesterol moieties and cholesteryl ester were lower relative to wax ester in meibum from eyes that had plaque brachytherapy compared with the contralateral eye that received no treatment and eyes that did not have uveal melanoma. ► The intensity of the meibum cis double bond resonances did not change with plaque brachytherapy.
How might these results change the focus of research or clinical practice?
► Meibomian gland dysfunction should be considered when treating patients for dry eye after plaque brachytherapy. ► Both the quality and quantity of meibum should be considered in designing a therapy for dry eye after plaque brachytherapy.
therapy or stereotactic radiation. 6 In contrast to this clinical study, histopathological evaluation of the conjunctiva following plaque brachytherapy suggested that epithelial stratification and distributional changes in ocular mucins could lead to the development of dry eye. 7 Meibomian glands, which are sebaceous in nature, are more sensitive to irradiation and are more permanently altered than other sebaceous glands like the glands of Zeis. 8 Following external beam radiation, irreversible structural damage to the meibomian glands has been documented in patients with orbital lymphoma. 9 Meibomian gland dysfunction (MGD) contributes to dry eye. [10][11][12][13][14][15] The meibomian glands, located in the eyelids, produce meibum, the major source of the tear film lipid layer (TFLL). 16 The TFLL contributes to tear film stability. 15 1H-NMR spectroscopy has been used to measure general meibum composition. [17][18][19][20][21][22][23][24][25][26] The relationships between meibum composition, structure and tear film stability determined using a spectroscopic approach have been reviewed. 27 Donors with dry eye due to MGD have a lower ratio of cholesteryl ester (CE) to wax ester (WE) compared with donors without dry eye. [26][27][28] In the current study, 1H-NMR spectroscopy was used to measure the amount and general composition of meibum from patients who had plaque brachytherapy to treat uveal melanoma. The goal of the study was to determine if plaque brachytherapy induces changes in the meibum. It would also aid in understanding the pathophysiology of dry eye following radiation, to determine if it is primarily aqueous-deficiency or evaporative dry eye due to MGD.
Collection and processing of human meibum Meibum was collected from 14 patients who underwent plaque brachytherapy for uveal melanoma at the Department of Ophthalmology, the University of Louisville. Written informed consent was obtained from all donors. All procedures were in accord with the Declaration of Helsinki. Patients or the public were not involved in the design, or conduct, or reporting, or dissemination plans of our research.
Meibomian glands were gently expressed by pressing the eyelid with a fingertip with strict attention to avoid touching the eyelid margin during expression. All four eyelids were expressed, and approximately 0.5 mg of meibum lipid was collected per individual for direct spectroscopic study. The expressate was collected with a platinum spatula under a slit lamp, and the pool of meibum was immediately dissolved into 0.8 mL of CDCl 3 in a 9 mm microvial with a Teflon cap (Microliter Analytical Supplies, Inc, Suwanee, Georgia, USA). Argon gas was blown over the samples to prevent oxidation. The sample in the vial was capped and frozen under argon gas until analysis. Each eye sample was collected separately. Analyses were performed within 3 weeks of collection of the sample. The samples never came in contact with any plastic to avoid plasticisers. Control CDCl 3 spectra were measured to ensure no impurities were present.
NMR measurement
Spectral data were acquired using a Varian VNMRS 700 MHz NMR spectrometer (Varian, Lexington, Massachusetts, USA) equipped with a 5 mm 1H{13C/15N} 13C-enhanced PFG cold probe (Palo Alto, California, USA). Spectra were acquired with a minimum of 250 scans, 45° pulse width and a relaxation delay of 1.000 s. All spectra were obtained at 25°C. The tetramethylsilane resonance was set to 0 ppm.
Commercial software (GRAMS 386; Galactic Industries Corp, Salem, New Hampshire, USA) was used for phasing, curve fitting and integrating.
RESULTS
Of the 14 patients who underwent plaque brachytherapy for choroidal melanoma, 9 (64%) were male, 13 (93%) were Caucasian and 1 was Hispanic. The patients' ages ranged from 51 to 77 years, averaging 61±9 years with a median age of 56 years. All patients had choroidal melanoma with the exception of one patient who had undergone radiotherapy for iris melanoma. 1H-NMR resonances, characteristic of human meibum, 26 27 were resolved in the spectra of meibum from eyes treated with radiation (figure 1). The total amount of meibum esters decreased by 80%±18% (±99% CI) in 11 eyes that were treated compared with the contralateral untreated eye. The total amount of meibum esters increased by 181% in two eyes that were treated compared with the contralateral untreated eye. The molar % CE/WE for meibum was significantly (p<0.0001) 67% lower in eyes that were irradiated compared with control eyes from donors without choroidal melanoma that were not treated 26 (figure 2A). The molar % CE/WE for meibum was significantly (p=0.023) 21% lower in eyes that were irradiated compared with the contralateral eye that did not have uveal melanoma and was not treated (figure 2A). The molar % (cholesterol plus CE)/WE for meibum was significantly (p=0.0008) 38% lower in eyes that were irradiated compared with control eyes from donors that did not have uveal melanoma 26 (figure 2A). The molar % (cholesterol plus CE)/WE for meibum was not significantly (p>0.05) different in eyes that were irradiated compared with the contralateral eye that did not have uveal melanoma and was not treated (figure 2B). The intensity of the meibum cis double bond resonances near 5.4 ppm (figure 1) did not change significantly (p>0.05) relative to the WE resonance times 3 at 4 ppm plus the cholesterol resonances at 1 and 0.66 ppm (figure 1) in irradiated eyes compared with the contralateral untreated eyes: 0.30±0.03 and 0.28±0.05, respectively.
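The cis double bond comparison in the last sentence, and the percent changes reported above, are simple arithmetic on integrated resonance intensities. A minimal sketch of that arithmetic (the function names and the example integral values are ours, for illustration; the actual integrals came from curve fitting in the GRAMS software):

```python
def cis_double_bond_index(i_54, i_40, i_10, i_066):
    """Intensity of the cis double bond resonances (~5.4 ppm) relative to
    3x the wax ester resonance (~4 ppm) plus the cholesterol resonances
    (~1 and ~0.66 ppm), as used to compare saturation between eyes."""
    return i_54 / (3.0 * i_40 + i_10 + i_066)

def percent_change(treated, untreated):
    """Percent change of a quantity in the treated eye vs the untreated eye."""
    return 100.0 * (treated - untreated) / untreated

# A treated/untreated meibum ester ratio of 0.2 corresponds to the
# reported 80% decrease:
change = percent_change(0.2, 1.0)  # -80.0
```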
Statement of principal findings
The two major findings of this study are: the amount of expressed meibum in eyes that had plaque brachytherapy was 80% lower in 11 of 14 eyes compared with the contralateral eye, and both the total cholesterol moieties and CE were lower relative to WE in meibum from eyes that had plaque brachytherapy compared with the contralateral eye that received no treatment and eyes that did not have uveal melanoma. 26

Strengths and weaknesses of the study

The advantages and disadvantages of using a spectroscopic approach to study meibum compositional, and tear film structural and functional, relationships have been reviewed. 27 As meibum quantity and composition have never been measured in relationship to brachytherapy, the strengths and weaknesses in relation to other studies and important differences in results cannot be discussed. Future studies could be designed to determine the relationships between meibum composition and structure in relation to the type of radioactive plaque (iodine-125, ruthenium-106, palladium-103 and so on), doses and dose rates, and plaque position.
Meibum quantity
The amount of meibum on the eyelid surface reservoir may not be important to tear film stability and dry eye because for donors with meibomian seborrhoea or MGD, the amount of meibum on the eyelid surface was significantly higher as measured using infrared spectroscopy 29 or a meibometer. 30 Meibum quantity was discussed in a review article 15 concluding that the uniformity of the spread film across the ocular surface is a far more reliable indicator than the quantity of meibum. The uniformity of the spread film can be estimated as the ratio of mean thickness to the thickness SD (based on the lipid layer thickness heterogeneity across the eye). Support for this idea comes from a study 31 where the central tear film lipid layer thickness range was 120-180 nm (range 60 nm) for healthy individuals, while 185-330 nm (range 145 nm) for dry eye patients.
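The uniformity estimate just described is straightforward to compute from a set of thickness measurements. A hedged sketch (the sample thickness values are invented for illustration and are not data from the cited study):

```python
import statistics

def spread_uniformity(thicknesses_nm):
    """Uniformity of the tear film lipid layer, estimated as the ratio of
    mean thickness to the sample standard deviation of thickness across
    the eye. Higher values indicate a more uniform spread film."""
    mean = statistics.mean(thicknesses_nm)
    sd = statistics.stdev(thicknesses_nm)
    return mean / sd

# Invented examples: a narrow thickness spread scores higher than a wide one.
uniform_film = spread_uniformity([120, 140, 150, 160, 180])
patchy_film = spread_uniformity([185, 220, 260, 300, 330])
```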
The thickness of the TFLL may also not be important to tear film stability and dry eye because TFLL thickness is not related to increased tear film breakup time or a decreased thinning as discussed below. The TFLL thickness of patients with seasonal allergic conjunctivitis was thicker than controls, yet the stability of their tear film and breakup time decreased, opposite of what one would expect. 31 Furthermore, there was no correlation between TFLL thickness and non-invasive tear break-up time for 29 young 32 and 86 older 33 subjects without dry eye and 110 patients with dry eye. 34 Although the thinning rate and TFLL was significant in one study, the correlation was rather low (r about 0.3). 35 The amount of meibum expressed from the meibomian glands, as measured in the current study, could be important to tear film stability. Eyes treated with plaque brachytherapy had 80% less meibum compared with the contralateral eye. It is reasonable to speculate that with such a low amount of meibum in the gland, expression of meibum on blinking could be hindered resulting in a very thin or absent TFLL that could destabilise the tear film. It has been suggested that one needs the absence of a TFLL to observe an increase in the rate of tear evaporation. [36][37][38] Future studies are planned to test this idea.
Meibum quality
The amount of CE was much lower in treated eyes, 0.16 CE/WE (mole/mole), and was lower compared with the amount of total cholesterol moieties ((cholesterol and CE)/WE), 0.31 mole/mole. This indicates that plaque radiation may have de-esterified the CE. In this study, both the total cholesterol moieties and CE alone were lower relative to WE in meibum from eyes that had plaque treatment compared with the contralateral eye that received no treatment and eyes that did not have choroidal or iris melanoma. It is interesting that patients with dry eye due to MGD also have lower CE/WE ratios. [26][27][28] It is attractive to suggest that lower ratios of CE/WE contribute to an unstable tear film and dry eye or perhaps cause the eyes to be more susceptible to dry eye. However, there are a few patients that have normal CE/ WE ratios and have dry eye and a few patients that have no dry eye but lower levels of CE/WE. [26][27][28] The degree to which a low level of CE/WE contributes to dry eye or susceptibility to dry eye is under investigation. It is likely that in addition to lower levels of CE/WE, changes in the amount of other moieties such as saturation 38 39 and/or proteins, 40 41 phospholipids and (O-acyl)-ω-hydroxy fatty acids 42 43 contribute to tear film stability. 27 Controlled biophysical experiments studying the WE/CE impact on the properties of meibomian films are a worthy direction for further study.
In conclusion, eyes that had plaque brachytherapy had a lower amount of expressible meibum and a lower CE/ WE ratio compared with meibum from the contralateral eye that received no treatment and control eyes that did not have a melanoma. Both the quality and quantity of meibum should be considered in designing a therapy for dry eye after plaque brachytherapy.
Contributors DB was responsible for collecting and analysing data, writing and submitting the article and funding support. SFA was responsible for collecting and analysing data and editing the manuscript. AR was responsible for designing the study, collecting samples and editing the manuscript.
Funding This work has received support from the National Institute of Health R01EY026180 and an unrestricted grant from Research to Prevent Blindness Inc. New York, New York, USA, GN151619B.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Patient consent for publication Not required.
Ethics approval Protocols and procedures were reviewed by the University of Louisville Institutional Review Board # 11.0319, August 2016. All procedures were in accord with the Declaration of Helsinki.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement All data relevant to the study are included in the article. | 2020-11-26T09:01:35.312Z | 2020-11-01T00:00:00.000 | {
"year": 2020,
"sha1": "7a32100ccf4d216ba024a7b08c9d8feef6e4ee9c",
"oa_license": "CCBYNC",
"oa_url": "https://bmjophth.bmj.com/content/bmjophth/5/1/e000614.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "58a6189ea9fa39bed7b8a35d644036be0a2c0ea2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
242024448 | pes2o/s2orc | v3-fos-license | Mini-Open Subpectoral Biceps Tenodesis Using a Suture Anchor with Bone-Bridge Backup
Pathology of the long head of the biceps tendon is a known cause of anterior shoulder pain. Current surgical management options include tenotomy and tenodesis. Tenodesis can be performed arthroscopically or as an open procedure. Arthroscopic tenodesis typically uses a suprapectoral attachment, which may fail to address tendon pathology in the bicipital groove. Open tenodesis carries iatrogenic risk to neurovascular structures and a fracture risk while drilling, as well as the morbidity of an open procedure. This technique paper describes a mini-open subpectoral approach using a suture anchor and bone bridge backup for dual fixation. Use of a suture anchor instead of an interference screw reduces drill hole diameter reducing the risk of iatrogenic humeral fracture. Dual fixation provides a robust repair which may be of use for athletic patients desiring an accelerated recovery.
Introduction
Pathology of the long head of the biceps tendon (LHBT) is a known cause of anterior shoulder pain. In the 1800s, Monteggia 1 and Soden 2 were among the first to identify and report on the LHBT as a source of shoulder pathology. They were followed in 1936 by Meyer, who described primary LHBT tendinopathy. 3 As understanding of LHBT pathology has advanced, so too have surgical techniques. In 1990, Patte et al. 4 discovered that spontaneous rupture of the LHBT alleviated pain in patients with massive, irreparable rotator cuff tears. They pioneered arthroscopic tenotomy of the LHBT as an effective procedure for the management of symptomatic LHBT pathology. However, tenotomy impacts the length-tension relationship of the biceps and can result in a Popeye deformity. 5 Furthermore, tenotomy can result in loss of elbow flexion and supination strength as well as fatigue and cramping of the biceps. [5][6][7][8][9][10] These complications, as well as the development of novel techniques and devices, have contributed to the adoption of biceps tenodesis (BT). Indeed, data from the American Board of Orthopaedic Surgery indicate that the incidence of BT is increasing, outpacing tenotomy. 11 Current consensus has largely settled on tenotomy and BT as the mainstays of surgical intervention; however, there is currently no gold-standard technique for BT. [12][13][14][15][16][17] Previous BT technique papers have discussed the use of suture anchors (SA), interference screws, cortical buttons, suture bone bridges (BB), and soft-tissue tenodesis as fixation methods. [18][19][20][21][22][23][24][25][26] This technique paper is the first to describe a mini-open subpectoral tenodesis with dual fixation using a SA and BB backup.
Surgical Technique (With Video Illustration)
A demonstration of the mini-open subpectoral BT with a bone bridge backup is available in Video 1. The advantages and disadvantages of this technique are provided in Table 1. Important pearls and pitfalls are provided in Table 2.
Patient Setup
The patient is positioned in the beach-chair position (Fig 1). The bony prominences are well-padded and a tourniquet is applied. The upper extremity is then prepped and draped in the usual sterile fashion.
Shoulder Arthroscopy
A posterior portal is made, and a diagnostic arthroscopy of the glenohumeral joint is performed. Any concomitant shoulder pathology is identified and addressed. To fully examine the LHBT, the operative elbow is extended with rotation and elevation of the shoulder. If the tendon is diseased but not completely ruptured, it is released proximally (Video 1).
Approach to the Bicipital Groove
The bicipital groove is palpated. A 7.5-cm incision line is marked out in the anterior axillary space at the inferior border of the pectoralis major (Fig 2). A #15 blade is used to make the incision. Dissection is performed through the subcutaneous and fascial tissue planes until the inferior border of the pectoralis major is reached. The arm is externally rotated 20°. The pectoralis major is retracted superiorly. The LHBT is palpated in the bicipital groove (Fig 3).
Release of the LHBT
Retractors are placed subperiosteally along the lateral and medial borders of the humerus. Electrocautery is used to release and remove the distal biceps tendon sheath. Right angle forceps are used to release the ruptured LHBT from the groove (Fig 4).
Suturing the LHBT
A clamp is applied to the free end of the LHBT. The tendon is whipstitched using a FiberLink suture (Arthrex, Naples, FL). The looped end of the suture is cut to create 3 free tails.
Drilling
Tendon tension for the tenodesis is approximated to the bicipital groove. The bone is prepared for drilling with electrocautery and a Cobb elevator. A 5.5-mm reamer is used to drill a unicortical hole at the approximated location in the bicipital groove. The drill site for the BB backup is localized 5 mm superior to the tenodesis hole. The BB hole is drilled unicortically using a 2.4-mm drill.
Sizing the Tendon
The tendon is sized using the sizer on the 5.5-mm SwiveLock anchor (Arthrex) (Fig 5). If the tendon is larger than 6 mm, it may need to be trimmed. The suture tails from the whipstitched tendon are loaded onto the anchor.
Passing Suture
A suture passing flag (Fig 6) is used to shuttle one end of the FiberLink suture in through the 5.5-mm hole and out through the proximal 2.4-mm hole. A nitinol micro suture lasso may be used instead for passing (Fig 7). The second FiberLink suture end is reverse-shuttled in through the 2.4-mm hole and out of the 5.5-mm hole. This results in one suture end exiting from the 2.4-mm hole and another exiting from the 5.5-mm hole (Fig 8).
Securing the Tenodesis
The 2 suture tails are tensioned to dunk the biceps tendon into the distal 5.5-mm hole. Tension is maintained on the sutures as the anchor is screwed into the 5.5-mm hole to secure the tenodesis. One free suture end is loaded onto a free needle and passed through the tendon at the tenodesis site in the bicipital groove. This is repeated with the other free suture end. The sutures are tied down onto the tendon with a surgeon's knot. The excess suture is cut, completing the tenodesis procedure. The arm may be gently flexed and extended at this point to confirm the integrity of the repair.
Postoperative Care
The patient is placed in sling immobilization for 4 weeks. After the first postoperative visit, patients begin physical therapy with progression from passive to active-assisted to active non-resisted range of motion. Light biceps strengthening is started at 8 weeks.
Discussion
The LHBT arises from the superior glenoid labrum and the supraglenoid tubercle. It then courses intra-articularly over the head of the humerus until it enters the bicipital groove. The extra-articular portion is stabilized by a capsuloligamentous complex composed of the coracohumeral ligament, the superior glenohumeral ligament, the upper border of the subscapularis, and the anterior supraspinatus. This complex forms the "biceps pulley." The extra-articular tendon can be described in 3 zones: (1) articular margin to the distal margin of subscapularis, (2) distal margin of subscapularis to proximal margin of pectoralis major, and (3) the subpectoral region. 27 This distinction is important, as extra-articular LHBT lesions in zones 2 and 3 can be missed during shoulder arthroscopy, resulting in persistent postoperative pain. [27][28][29] This is highlighted when considering whether to perform a suprapectoral or subpectoral tenodesis.
Arthroscopic suprapectoral biceps tenodesis (ASPBT) carries several advantages. First, it is predominantly an arthroscopic procedure and thus avoids the risks of open surgery. 9,16,26,27,[30][31][32] Furthermore, ASPBT is thought to carry a reduced risk of iatrogenic humeral fracture due to a larger humeral width at the tenodesis site. 17,31,33 Overmann et al. 34 reviewed 15,085 BT and reported a humeral fracture incidence of <0.1%. All fractures arose from an open subpectoral biceps tenodesis (OSPBT) technique. 34 The authors suggest that drill holes in the humerus act as stress risers, which decrease humeral resistance to torsional stress and increase the risk of fracture. 34 Our technique reduces this risk by using a SA, which has comparable fixation to conventional interference screws and requires a narrower-diameter drill. 35 Another advantage of ASPBT is that the tenodesis site is farther away from the brachial plexus and deep brachial artery. 16,31 We address this risk by externally rotating the arm, which has been demonstrated by Dickens et al. 36 to increase the distance between the tenodesis site and the musculocutaneous nerve. Furthermore, Gifford et al. 37 reported that the risk of injuring the musculocutaneous nerve during a mini-OSPBT is minimized with limited and careful medial retraction.
Despite these advantages, some studies report the potential for persistent postoperative pain with ASPBT. Yi et al. 38 reported significant decreases in visual analog scale scores and bicipital groove tenderness at 3 months when comparing OSPBT versus ASPBT but noted no difference at final follow-up. This persistent anterior shoulder pain has led some authors to report increased revision rates with ASPBT. 39,40 The LHBT has been found to contain a network of sensory and sympathetic nerve fibres with greater innervation of the proximal tendon. 41 Furthermore, histological analysis by Moon et al. 28 found that 80% of LHBT demonstrated degenerative changes greater than 5 cm distal from the glenoid tubercle. ASPBT may fail to address these proximal lesions as well as underlying bicipital groove pathology, thereby resulting in persistent postoperative pain and increased revision rates. However, more recent studies suggest that there is no difference in outcome between ASPBT and OSPBT. In their 2019 review of 598 patients, Hurley et al. 42 found no significant difference in outcomes between ASPBT and OSPBT. Similarly, a 2019 review of 15,527 patients undergoing BT by Forsythe et al. 43 found no significant difference in revision rates between ASPBT and OSPBT (1.8% vs. 1.9%, P = .5). Furthermore, a 2020 meta-analysis by Deng et al. 44 concluded that there was no significant difference in functional ASES and Constant scores or postoperative complications when comparing open versus arthroscopic BT approaches. Although there remains some conflict in the literature regarding the overall clinical differences between an arthroscopic versus open tenodesis approach, a cost analysis comparing the two approaches concluded that an open approach was associated with lower costs, with estimated savings of up to $5000. 45 In their 2018 study, Liechti et al. 46 examined whether range-of-motion restrictions were necessary following a dual-fixation BT with a button and interference screw.
Their patients (n = 109) were placed in a sling after surgery, given no postoperative restrictions, and physical therapy was started immediately following surgery. 46 The authors reported a 2.2% revision rate at 3.5-year follow-up, which is comparable with the literature. 46 However, they also reported that functional outcomes were similar to other rehabilitation protocols. 46 Our technique offers a compromise between strong fixation and minimal humeral drilling, providing a robust dual fixation. This is of particular interest when treating highly active patients desiring an accelerated rehabilitation and return to activity.
EXACT SOLUTION OF SPAN-WISE FLUCTUATING MHD CONVECTIVE FLOW OF SECOND GRADE FLUID THROUGH POROUS MEDIUM IN A VERTICAL CHANNEL WITH HEAT RADIATION AND SLIP CONDITION
The magnetohydrodynamic (MHD) convective flow of a viscoelastic, incompressible and electrically conducting fluid through a porous medium filled in a vertical channel is analyzed. The channel plate at y* = −d/2 is subjected to a slip-flow condition and the other at y* = +d/2 to a no-slip condition. The temperature of the plate at y* = +d/2 with the no-slip condition is assumed to vary both in space and time. The temperature difference of the walls of the channel is assumed high enough to induce heat transfer due to radiation. A magnetic field of uniform strength is applied perpendicular to the planes of the channel walls. The magnetic Reynolds number is assumed very small so that the induced magnetic field is neglected. It is also assumed that the conducting fluid is an optically thin gray gas, absorbing/emitting radiation and non-scattering. Exact analytical solutions of the non-linear partial differential equations governing the flow problem are obtained. The velocity field, the temperature field, the amplitude and the phase angle of the skin friction and the heat transfer coefficient are shown graphically, and their dependence on the various flow parameters is discussed in detail.
INTRODUCTION
In recent years, interest in the study of flows of non-Newtonian fluids through porous media has grown considerably because of their applications in engineering. This is mainly due to their several applications in the petroleum industry, the manufacturing and processing of foods, the paper industry and many other industrial applications, for example filtration processes, biomechanics, packed bed reactors, insulation systems, ceramic processing, enhanced oil recovery, chromatography and many others. SINGH and SINGH [1] studied MHD flow of a dusty viscoelastic liquid through a porous medium between two inclined parallel plates. HAYAT et al. [2] discussed an analytical solution for MHD transient rotating flow of a second grade fluid in a porous space. TIWARI and RAVI [3] studied analytically the transient rotating flow of a second grade fluid in a porous medium. The heat transfer aspect of MHD oscillatory viscoelastic flow in a channel filled with a porous medium was presented by CHOUDHARY and DAS [4]. GHOSH and SHIT [5] analyzed mixed convection MHD flow of a viscoelastic fluid in a porous medium past a hot vertical plate. CHOUDHURY et al. [6] investigated visco-elastic free convective flow past a vertical porous plate through a porous medium with suction and heat source.
The problems of flow of non-Newtonian fluids offer varied challenges to applied mathematicians, numerical analysts and modelers in developing suitable algorithms for computing the flows. In the literature, non-Newtonian fluids are principally classified on the basis of their behavior in shear. A fluid with a linear relationship between the shear stress and the shear rate, giving rise to a constant viscosity, is characterized as a Newtonian fluid. The equations that describe flows of Newtonian fluids are the Navier-Stokes equations, for which exact solutions are rare. Building on the knowledge of solutions for Newtonian fluids, models for various non-Newtonian fluids have been developed, such as the Maxwell fluid, Voigt fluid, Oldroyd-B fluid, Rivlin-Ericksen fluid and power-law fluid. RAPTIS and TAKHAR [7] studied heat transfer from flow of an elastico-viscous fluid. HAYAT et al. [8] obtained solutions of MHD flows of an Oldroyd-B fluid. MEHTA and RAO [9,10] discussed buoyancy-induced flow of non-Newtonian fluids over a non-isothermal horizontal plate embedded in a porous medium and with non-uniform surface heat flux. Due to the complexity of these fluids, several constitutive equations of non-Newtonian fluids have been proposed in the literature. Amongst these there is a subclass of non-Newtonian fluids, namely the second grade fluids, for which one can reasonably hope to obtain analytical solutions. In the case of differential type fluids, the equations of motion are one order higher than the Navier-Stokes equations and, thus, the adherence boundary condition is insufficient to determine the solution completely (see refs. HAYAT et al. [11,12] for a detailed discussion of the relevant issues). Because of this fact, the equations governing flow of non-Newtonian fluids are much more complicated, and the class of exact solutions narrows further for non-Newtonian fluids.
RAJAGOPAL and GUPTA [13] obtained an exact solution for the flow of a non-Newtonian fluid past an infinite porous plate. Another exact solution of non-Newtonian fluid flows with prescribed vorticity was obtained by LABROPULU [14]. FETECAU and ZIEREP [15] presented a study on a class of exact solutions of the equations of motion of a second grade fluid. An exact solution of the flow problem of a second grade fluid through two porous walls was arrived at by ARIEL [16]. KHAN et al. [17] obtained new exact solutions for an Oldroyd-B fluid in a porous medium. SINGH [18] analyzed another exact solution of viscoelastic mixed convection MHD oscillatory flow through a porous medium filled in a vertical channel.
Wall slip flow is another very important phenomenon that is widely encountered in this era of industrialization. It has numerous applications, for example in the lubrication of mechanical devices, where a thin film of lubricant is attached to surfaces slipping over one another, or where the surfaces are coated with special coatings to minimize the friction between them. By lubricating or coating the solid surface, the fluid particles adjacent to it no longer move with the velocity of the surface but have a finite tangential velocity and hence slip along the surface. TICHY [19] analyzed non-Newtonian lubrication with the convected Maxwell model. A number of scholars have shown interest in the phenomenon of the slip-flow regime due to its wide-ranging applications. MARQUES et al. [20] considered the effect of fluid slippage at the plate for Couette flow. RHODES and ROULEAU [21] studied the hydrodynamic lubrication of partial porous metal bearings. The problem of the slip-flow regime plays a very important role in modern science, technology and wide-ranging industrialization. In view of the practical applications of the slip-flow regime, it has remained of paramount interest to several scholars, e.g. SHARMA and CHAUDHARY [22]; SHARMA [23]; JAIN and GUPTA [24]. KHALED and VAFAI [25] obtained exact solutions of oscillatory Stokes and Couette flows of Newtonian fluids under a slip-flow condition. MEHMOOD and ALI [26] also obtained an exact solution for the unsteady MHD oscillatory flow of a viscous fluid in a planar channel to study the effect of the slip condition. Recently, SINGH [27] studied an oscillatory MHD forced convection flow of an electrically conducting, viscous incompressible fluid through a porous medium in a vertical channel under a slip condition. He [28] further obtained an exact solution of an oscillatory fully developed MHD convection flow through a porous medium in a vertical porous channel in the slip-flow regime.
A number of studies have also appeared in the literature for the flows of non-Newtonian fluids in the slip-flow regime. HAYAT et al. [29] studied slip flow and heat transfer of a second grade fluid past a stretching sheet through a porous space. SIDDIQUI et al. [30] analyzed the effect of the slip condition on unsteady flows of an Oldroyd-B fluid between parallel plates; related slip-flow work was reported by AHMED and TALUKDAR [31]. The aim of the present study is to formulate and analyze the flow problem of a viscoelastic (second grade), incompressible and finitely electrically conducting fluid through a porous medium bounded by two infinite vertical plates in the presence of heat radiation. The temperatures of the channel plates with the no-slip condition and the slip condition remain span-wise cosinusoidal and constant, respectively, as shown in Figures 1a and 1b. A magnetic field of uniform strength is applied transverse to the flow, and the magnetic Reynolds number is assumed very small so that the induced magnetic field is neglected. It is also assumed that the conducting fluid is an optically thin gray gas, absorbing/emitting radiation and non-scattering. An exact solution of the mathematical problem so formed is obtained, and the final results for the velocity, temperature, shear stress and heat transfer coefficient in terms of their amplitudes and phase angles are discussed in the last section of the paper.
BASIC EQUATIONS
In order to derive the basic equations for the problem under consideration, the following assumptions are made: (i) The flow considered is unsteady and laminar between two infinite electrically non-conducting vertical plates. (ii) The fluid is second order viscoelastic, finitely conducting and with constant physical properties. (iii) A magnetic field of uniform strength is applied normal to the flow. (iv) The magnetic Reynolds number is taken small enough so that the induced magnetic field is neglected. (v) Hall effect, electrical and polarization effects are neglected. (vi) It is assumed that the fluid is an optically thin gray gas, absorbing/emitting radiation and non-scattering. Under these assumptions, we write the hydromagnetic equations of continuity, motion and energy, where in equation (2) T is the Cauchy stress tensor. The constitutive equation derived by COLEMAN and NOLL [35] for an incompressible homogeneous fluid of second order is

T = −pI + μA₁ + α₁A₂ + α₂A₁².

Here −pI is the indeterminate part of the stress due to the constraint of incompressibility, and μ, α₁ and α₂ are the material constants describing viscosity, elasticity and cross-viscosity, respectively. The kinematic tensors A₁ and A₂ are the Rivlin–Ericksen tensors defined as

A₁ = (∇V) + (∇V)ᵀ, A₂ = dA₁/dt + A₁(∇V) + (∇V)ᵀA₁,

where ∇ denotes the gradient operator and d/dt the material time derivative. According to MARKOVITZ and COLEMAN [36], the material constants μ and α₂ are taken as positive and α₁ as negative.
On the right-hand side of equation (2), the last term ρgβ(T* − T₁) accounts for the force due to buoyancy, and the second last term is the Lorentz force J × B due to the magnetic field B. Here V is the velocity vector, B is the magnetic field, J is the current density, ρ is the density, c_p is the specific heat at constant temperature, k is the thermal conductivity, σ is the electric conductivity and q* is the heat radiation.
FORMULATION OF THE PROBLEM
We consider an unsteady flow of a viscoelastic, incompressible and electrically conducting fluid in a hot vertical channel filled with a porous medium. A schematic diagram of the physical problem with span-wise cosinusoidal variation of the plate temperature is shown in Figures 1a and 1b. The two parallel stationary walls of the channel are a distance d apart. A Cartesian coordinate system (X*, Y*, Z*) is chosen such that the X*-axis, directed upwards, lies along the centerline of the channel and the Y*-axis is perpendicular to the planes of the parallel plates. A magnetic field B₀ of uniform strength is applied transversely along the Y*-axis. Since the walls of the channel are considered non-porous, the integration of the continuity equation (1) implies that v* = 0. All the physical quantities except pressure are independent of x* for this fully developed laminar flow in the infinite vertical channel. The temperature of the plate at y* = d/2 varies span-wise cosinusoidally as T* = T₁ + (T₂ − T₁) cos(πz*/d − ω*t*).
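As a quick numerical illustration of a span-wise cosinusoidal wall temperature of the form T₁ + (T₂ − T₁) cos(πz/d − ωt), the sketch below evaluates it at a few stations; the numerical values of T1, T2, d and w are illustrative placeholders, not data from the paper:

```python
import math

def wall_temperature(z, t, T1, T2, d, w):
    """Span-wise cosinusoidal wall temperature T1 + (T2 - T1)*cos(pi*z/d - w*t)."""
    return T1 + (T2 - T1) * math.cos(math.pi * z / d - w * t)

T1, T2, d, w = 300.0, 340.0, 1.0, 2.0  # illustrative values only

# At z = 0, t = 0 the wall is hottest (T2); at z = d, t = 0 the cosine is -1,
# so the wall temperature dips symmetrically below T1 to 2*T1 - T2.
assert abs(wall_temperature(0.0, 0.0, T1, T2, d, w) - T2) < 1e-12
assert abs(wall_temperature(d, 0.0, T1, T2, d, w) - (2 * T1 - T2)) < 1e-12
```

At every fixed span-wise station the temperature oscillates harmonically in time, which is what makes the complex-exponential solution method of the later sections applicable.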
Then, taking into account the usual Boussinesq approximation, the forced and free convection flow is governed by the momentum equation (8) and the energy equation (9), subject to the boundary conditions

u* = 0, T* = T₁ + (T₂ − T₁) cos(πz*/d − ω*t*) at y* = d/2.

For the case of an optically thin gray gas, the local radiant heat flux is expressed by

∂q*/∂y* = 4 a* σ* (T*⁴ − T₁⁴),   (12)

where a* is the mean absorption coefficient and σ* is the Stefan–Boltzmann constant.
We assume that the temperature differences within the flow are sufficiently small such that T*⁴ may be expressed as a linear function of the temperature. This is accomplished by expanding T*⁴ in a Taylor series about T₁ and neglecting higher order terms, thus

T*⁴ ≅ 4T₁³T* − 3T₁⁴.   (13)

Substituting (13) into (12) and simplifying, we obtain

∂q*/∂y* = 16 a* σ* T₁³ (T* − T₁).   (14)
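The accuracy of this linearization is easy to check numerically. The sketch below (plain Python; T1, a and sigma carry illustrative values and stand in for the paper's T₁, a* and σ*) compares the exact flux divergence 4aσ(T*⁴ − T₁⁴) with the linearized form 16aσT₁³(T* − T₁):

```python
def radiative_divergence_exact(T, T1, a, sigma):
    """Optically thin gray gas: dq/dy = 4*a*sigma*(T**4 - T1**4)."""
    return 4 * a * sigma * (T**4 - T1**4)

def radiative_divergence_linear(T, T1, a, sigma):
    """Linearized form obtained via T**4 ~ 4*T1**3*T - 3*T1**4."""
    return 16 * a * sigma * T1**3 * (T - T1)

# Illustrative values (not from the paper): reference temperature 300 K,
# a = 1, sigma = Stefan-Boltzmann constant.
T1, a, sigma = 300.0, 1.0, 5.670e-8
T = 303.0  # a 1% temperature difference

exact = radiative_divergence_exact(T, T1, a, sigma)
approx = radiative_divergence_linear(T, T1, a, sigma)
rel_err = abs(exact - approx) / abs(exact)

# For small temperature differences the two agree closely (here within ~1.5%).
assert rel_err < 0.02
```

The relative error grows with the temperature difference, which is why the paper restricts the analysis to small temperature differences within the flow.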
Further, substitution of (14) into the energy equation (9) gives equation (15). Introducing non-dimensional variables in equations (8), (15), (10) and (11), we obtain the governing equations in dimensionless form, equations (16)-(18), with boundary conditions (19) and (20).
SOLUTION OF THE PROBLEM
In order to obtain the solution of this unsteady problem it is convenient to adopt complex variable notation for the velocity, temperature and pressure; the real part of the solution has physical significance. Thus, we write the velocity, temperature and pressure as

u(y, z, t) = u₀(y, z) e^{iωt}, θ(y, z, t) = θ₀(y, z) e^{iωt}, −∂p/∂x = A e^{iωt},   (21)

where A is a constant.
The boundary conditions in equations (19) and (20) can also be written in complex notation. Substituting expressions (21) into equations (17) and (18) yields a pair of ordinary differential equations, (24) and (25), which are solved under the transformed boundary conditions (26) and (27); the solutions for the velocity and the temperature fields are obtained as equations (28) and (29), respectively. From the velocity field in equation (28) we can obtain the skin friction at the left wall in terms of its amplitude |F| and phase angle φ. Writing the complex skin friction as F = F_r + iF_i, its amplitude and phase angle are respectively given by

|F| = √(F_r² + F_i²), and φ = tan⁻¹(F_i/F_r).
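This amplitude-phase decomposition is simply the polar form of a complex number. A minimal sketch using Python's `cmath` (the value F = 3 + 4i is purely hypothetical, chosen only so the amplitude is exactly 5):

```python
import cmath

def amplitude_and_phase(F):
    """Polar decomposition of a complex coefficient F = Fr + i*Fi:
    amplitude |F| = sqrt(Fr**2 + Fi**2) and phase angle phi with
    tan(phi) = Fi/Fr. cmath.phase uses atan2(Fi, Fr), which also
    resolves the correct quadrant when Fr < 0."""
    return abs(F), cmath.phase(F)

# The physical, time-dependent quantity is the real part of F*exp(i*w*t),
# i.e. |F|*cos(w*t + phi): phi > 0 is a phase lead, phi < 0 a phase lag.
F = complex(3.0, 4.0)  # hypothetical Fr = 3, Fi = 4
amp, phi = amplitude_and_phase(F)
assert abs(amp - 5.0) < 1e-12          # |3 + 4i| = 5
assert abs(phi - 0.9272952180016122) < 1e-9  # atan2(4, 3)
```

The same decomposition applies verbatim to the complex rate of heat transfer discussed later, whose amplitude and phase are plotted against the frequency of oscillations.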
From the temperature field given in equation (29), the rate of heat transfer at the wall can similarly be obtained in terms of its amplitude |H| and phase angle ψ.
RESULTS AND DISCUSSION
An unsteady MHD convective flow of a viscoelastic fluid through a porous medium in a vertical channel under a slip-flow condition is analyzed. The closed-form solutions for the velocity and temperature fields are obtained analytically and then evaluated numerically for different values of the parameters appearing in the equations. To gain better insight into the physical problem, the variations of the velocity, temperature, skin friction and rate of heat transfer, in terms of their amplitudes and phase angles, with parameters such as the viscoelastic parameter γ, Grashof number Gr, Hartmann number M, permeability of the porous medium K, Prandtl number Pr, radiation parameter N, pressure gradient A and the frequency of oscillations ω are shown graphically to assess the effect of each parameter.
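For orientation, the main dimensionless groups can be evaluated directly from fluid properties. The sketch below uses the standard textbook definitions (Pr = μc_p/k, Gr = gβΔT d³/ν², M = B₀d√(σ_e/μ)) with illustrative water properties; the paper's own nondimensionalization may differ in detail from these forms:

```python
import math

def prandtl(mu, cp, k):
    """Pr = mu*cp/k: ratio of momentum to thermal diffusivity."""
    return mu * cp / k

def grashof(g, beta, dT, d, nu):
    """Gr = g*beta*dT*d**3/nu**2: buoyancy relative to viscous forces."""
    return g * beta * dT * d**3 / nu**2

def hartmann(B0, d, sigma_e, mu):
    """M = B0*d*sqrt(sigma_e/mu): magnetic relative to viscous forces."""
    return B0 * d * math.sqrt(sigma_e / mu)

# Illustrative properties of water near 20 C (not taken from the paper):
mu, cp, k = 1.0e-3, 4182.0, 0.6   # Pa*s, J/(kg*K), W/(m*K)
Pr_water = prandtl(mu, cp, k)

# Consistent with the text, which uses Pr = 7 for water and Pr = 0.7 for air.
assert 6.5 < Pr_water < 7.5
```

Evaluating Gr and M the same way for a given channel width and field strength shows which force balance dominates a particular run of the solution.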
The velocity variations with these parameters over the width of the channel are presented in Figure 2. Curve I (blue) corresponds to the case of no-slip conditions at both plates of the channel, i.e. when the slip-flow parameter h = 0. Curve II (green) represents the case of a Newtonian fluid, i.e. when the viscoelastic parameter γ = 0. All the remaining curves are compared with curve III (red). Comparison of curves IV, VI, VIII and XI with the red curve III clearly shows that the velocity increases with the increase of the slip-flow parameter h, Grashof number Gr, permeability of the porous medium K and the favorable pressure gradient A. The increasing slip-flow parameter clearly means that the increasing tangential velocity at the wall gives rise to the velocity in the channel. The increase of velocity with increasing Grashof number physically means an increase of the buoyancy force, because of which the velocity increases. The maximum of the velocity profiles for increased Grashof number shifts toward the right half of the channel due to the greater buoyancy force in this part of the channel, because of the presence of the hotter plate; otherwise the velocity remains parabolic, with its maximum almost at the center of the channel, for increases of all other parameters. The increase of permeability of the porous medium K implies that the resistance posed by the porous matrix is reduced and, thus, the velocity increases. It is also very natural that the flow will be faster for an increased favorable pressure gradient. Similarly, the comparison of the remaining curves, namely V, VII, IX, X and XII, with the red curve III reveals that the velocity decreases with increasing viscoelastic parameter γ, Hartmann number M, Prandtl number Pr, radiation parameter N and the frequency of oscillations ω. The decrease of velocity with increasing Hartmann number means that the flow is retarded by the increasing Lorentz force due to increasing magnetic field strength.
Since the Prandtl number gives the relative importance of viscous diffusion to thermal diffusion, for a larger Prandtl number viscous diffusion is predominant, and due to this the velocity decreases. Thus, the velocity in the case of water (Pr = 7) is less than that in the case of air (Pr = 0.7).
The amplitude |F| of the skin friction against the frequency of oscillations is presented in Figure 3 for different sets of parameter values. In this figure, comparison of curves IV, VI and IX with the dashed curve I (---) reveals that the amplitude increases with the increase of Grashof number Gr, permeability of the porous medium K and the pressure gradient parameter A. However, the comparison of curves II, III, V, VII and VIII shows that |F| decreases with the increase of the slip-flow parameter h, viscoelastic parameter γ, Hartmann number M, Prandtl number Pr and the radiation parameter N. The amplitude goes on decreasing with increasing frequency of oscillations ω. The phase angle φ of the skin friction is presented in Figure 4 against the frequency of oscillations ω. The comparison of curves III, IV and VI with the dashed curve I (---) in this figure exhibits that the phase angle of the skin friction increases with the increase of viscoelastic parameter γ, Grashof number Gr, and permeability K of the porous medium, while the phase angle decreases with the increase of the slip-flow parameter h, Hartmann number M, Prandtl number Pr, radiation parameter N and pressure gradient A, as is clear from the comparison of curves II, V, VII, VIII and IX with the dashed curve I (---). It is depicted in Fig. 4 that there is always a phase lead, and it goes on increasing with increasing frequency of oscillations ω.
The temperature profiles are shown in Figure 5. The figure clearly depicts that the temperature decreases with the increase of each of the parameters, i.e. Prandtl number Pr, radiation parameter N and the frequency of oscillations ω. The amplitude |H| and phase angle ψ of the rate of heat transfer are shown in Figures 6 and 7, respectively. It is clear from Figure 6 that the amplitude decreases with the increase of the Prandtl number and the radiation parameter. There is a sharper decrease in amplitude for the case of water (Pr = 7) than for the case of air (Pr = 0.7). However, the amplitude remains the same for large values of radiation with increasing frequency of oscillations ω. Figure 7 shows that with increasing frequency of oscillations ω, the phase angle ψ of the rate of heat transfer oscillates between a phase lag and a phase lead, but for an increased radiation parameter there is always a phase lead and the phase angle remains linear.
CONCLUSIONS
From the above discussion, the following conclusions are made:
- The velocity increases with the increase of the slip-flow parameter h, Grashof number Gr, permeability of the porous medium K and the favourable pressure gradient A, but decreases with increasing viscoelastic parameter γ, Hartmann number M, Prandtl number Pr, radiation parameter N and the frequency of oscillations ω.
- The amplitude of the skin friction increases with all those parameters that increase the velocity and decreases with those that decrease it.
- The phase angle of the skin friction increases with the increase of the viscoelastic parameter γ, Grashof number Gr and porous medium permeability K, but decreases with the increase of the slip parameter h, Hartmann number M, Prandtl number Pr, radiation parameter N and pressure gradient A. There is always a phase lead of the skin friction, and it goes on increasing with increasing frequency of oscillations ω.
- The temperature decreases with the increase of each of the parameters Pr, N and ω.
- The amplitude of the rate of heat transfer is less in water (Pr = 7) than in air (Pr = 0.7). The phase of the heat transfer oscillates between a phase lag and a phase lead.
Influence of CeO2 Addition to Ni–Cu/HZSM-5 Catalysts on Hydrodeoxygenation of Bio-Oil
Hydrodeoxygenation (HDO) of bio-oil is a method of bio-oil upgrading. In this paper, x%CeO2-Ni-Cu/HZSM-5 (x = 5, 15, and 20) was synthesized as an HDO catalyst by the co-impregnation method. The HDO performance of x%CeO2-Ni-Cu/HZSM-5 (x = 5, 15, and 20) in the reaction process was evaluated and compared with Ni-Cu/HZSM-5 in terms of the properties and yield of the upgraded oil. The difference in chemical composition between the bio-oil and the upgraded oil was evaluated by GC-MS. The results showed that the addition of CeO2 decreased the water and oxygen contents of the upgraded oil, increased the higher heating value, reduced the acid content, and increased the hydrocarbon content. When the CeO2 addition was 15%, the yield of upgraded oil reached the maximum, increasing from 33.9 wt% (Ni-Cu/HZSM-5) to 47.6 wt% (15%CeO2-Ni-Cu/HZSM-5). The catalytic activities of x%CeO2-Ni-Cu/HZSM-5 (x = 5, 15, and 20) and Ni-Cu/HZSM-5 were characterized by XRD, N2 adsorption-desorption, NH3 temperature-programmed desorption, H2 temperature-programmed reduction, TEM, and XPS. The results showed that the addition of CeO2 increased the dispersion of the active metal Ni, weakened the bond between the active metal and the catalyst support, increased the ratio of Brønsted acid sites to total acid sites, and decreased the reduction temperature of NiO. When the CeO2 addition was 15%, the activity of the catalyst was the best. Finally, the carbon deposition resistance of the deactivated catalysts was investigated by thermogravimetric (TG) analysis, and the results showed that the addition of CeO2 could improve the carbon deposition resistance of the catalysts. When the CeO2 addition was 15%, the coke deposition decreased from 41 wt% (Ni-Cu/HZSM-5) to 14 wt% (15%CeO2-Ni-Cu/HZSM-5).
Introduction
Biomass is one of the most promising renewable energy sources for supplementing traditional fossil fuels; at present, biomass energy accounts for 10-14% of the world's energy supply [1]. A promising method for producing alternative fuel is to convert biomass into bio-oil. However, bio-oil composition is complex and its chemical properties are unstable, so it requires further modification and upgrading to become a high-quality liquid fuel. Presently, one of the most effective means of upgrading is the catalytic hydrogenation of bio-oil, in which catalyst selection plays a key role.
In recent years, increasing attention has been paid to Ni-based catalysts with a relatively high activity. Zhang et al. [2] prepared a series of catalysts by loading Ni onto supports (HZSM-5 and Al 2 O 3 ). Phenol was selected as a model compound to observe the hydrogenation activity of the catalysts. Their results showed that when the Ni loading was 10 wt% and the reaction temperature was 240 °C, the activity of the catalyst was the highest and the conversion of phenol reached its highest value (91.8%). Zhang et al. [3] used the impregnation method to load different Ni contents onto the mixed supports Al 2 O 3 -SiO 2 , Al 2 O 3 -TiO 2 , TiO 2 -SiO 2 , and TiO 2 -ZrO 2 . The catalysts were tested in the reaction with guaiacol, a model compound of bio-oil. The experimental results showed that the Ni/TiO 2 -ZrO 2 catalyst exhibited a better hydrodeoxygenation (HDO) activity: the conversion of bio-oil was 19.3%, the pH changed from 2.4 to 4.2, and the water content decreased remarkably from 51.4% to 1.5%. In addition, Ni-Cu bimetallic catalysts have been prepared by weighing the advantages and disadvantages of various catalysts, and the experimental results showed that the HDO performance of Ni-Cu bimetallic catalysts was significantly improved. Yao and Goodman [4] loaded Cu onto Ni metal (with adsorbed hydrogen atoms) and applied temperature-programmed desorption for catalyst characterization. They found that the hydrogen atoms adsorbed on the Ni metal spilled over onto the surface of the Cu atoms, which provided a new idea for the design of catalysts. In particular, a small amount of an active metal (i.e., Cu) was added to a transition metal (i.e., Ni) to produce Ni-Cu alloys, which not only improved the dispersion of the active metal Ni but also reduced the binding energy between the active metal and the hydrogen atoms. This allowed the activated hydrogen to desorb easily from the catalyst surface and enter the reaction system, thus effectively improving the hydrogenation activity of the catalyst. A small amount of Cu can serve as hydrogen storage and can also decrease the reduction temperature of the metal, thereby laying a foundation for the preparation of an anti-coking catalyst for the hydrodeoxygenation of bio-oil. Ardiyanti et al. [5] prepared a series of bimetallic catalysts by impregnating different Ni-Cu contents onto δ-Al 2 O 3 , with a total metal content of 20 wt%. The effects of the different catalysts on bio-oil HDO were studied over a Ni/Cu mass ratio range of 0.32-8.1. Their results showed that the content of aliphatic hydrocarbons in the products of the Ni-Cu bimetallic catalysts was significantly higher than that of the single-metal Ni catalyst. When the Ni/Cu ratio reached 8, the yield of the oil phase was 40%, the oxygen removal efficiency was 57%, and the hydrogen consumption of the catalyst was the highest; the catalytic activity was optimal in the process of bio-oil catalytic hydrogenation.
However, although Ni-Cu-based catalysts can significantly improve the HDO of bio-oil, coke deposition on the catalysts remains a difficult problem. Coke deposition is the main cause of catalyst deactivation [6-10]. The bio-oil hydrogenation process involves the polymerization and condensation of unsaturated components, and the macromolecular compounds generated deposit on the surface and in the pores of the catalyst, thus forming coke. Coke deposits can cover the active sites of catalysts and hinder contact between the reactants and the active sites, and the catalyst is then deactivated. Li et al. [11] studied coke deposition during the HDO process, and their results showed that coke formed at high temperatures was more difficult to desorb. Zhang et al. [12] studied the effect of reaction temperature on deoxygenation in the HDO process and found that the deoxygenation of bio-oil was a function of time and temperature, with increasing temperature resulting in decreasing deoxygenation. Therefore, the reaction temperature of hydrodeoxygenation should be controlled within a suitable range.
Rare earth metal oxides (e.g., CeO 2 and La 2 O 3 ) can significantly improve the activity, stability, and anti-coking ability of nickel-based catalysts. The addition of a small amount of rare earth oxides to the catalyst as a promoter has received considerable attention from different researchers [13,14]. CeO 2 , as an effective redox agent, can convert between Ce 4+ and Ce 3+ to achieve the conversion between oxygen and oxygen vacancies, resulting in a good oxygen storage capacity [15]. Thus, this work explores the effect of CeO 2 addition on the dispersion, particle size, and reducibility of the active component Ni, based on Ni-Cu/HZSM-5 (Ni:Cu = 8:1).
Materials
All reagents used were of analytical grade. Toluene, n-butyl alcohol, ethanol, nickel nitrate, cupric nitrate, and cerium nitrate were obtained from Sinopharm Chemical Reagent Co. HZSM-5 zeolite (Si/Al = 50) was purchased from Nankai University Catalyst Co. Bio-oil was produced by the fast pyrolysis of rice husk at 500 °C in a self-made fluidized bed reactor at the Institute of Environmental Sciences, Zhengzhou University, China.
Preparation of Catalyst
The CeO 2 -Ni-Cu/HZSM-5 catalyst was prepared by the co-impregnation method. First, the load of Ni-Cu (8:1) was 15%. X g of Ce(NO 3 ) 3 ·6H 2 O (X = 0.5, 1.0, and 1.5), 2.643 g of Ni(NO 3 ) 2 ·6H 2 O, and 0.2534 g of Cu(NO 3 ) 2 ·3H 2 O were mixed and stirred with 60 mL of deionized water. Then, 4 g of HZSM-5 was added to the solution, which was then heated in a water bath at 60 °C and evaporated to a paste. Subsequently, the specimen was dried at 110 °C for 12 h and then calcined at 400 °C for 2 h. Since the decomposition temperature of cerium nitrate is about 200 °C and that of nickel nitrate and cupric nitrate about 300 °C, we used a calcination temperature of 400 °C to ensure a complete decomposition of these compounds into oxides; this calcination temperature further ensured the stability of the catalyst at our reaction temperatures (up to 330 °C). Finally, the catalyst was placed in a quartz tube under a flow of hydrogen and argon (Ar:H 2 = 1), heated to 460 °C at a rate of 5 °C/min, and then held at 460 °C for 2 h; thus, the NiO in the catalyst was reduced to metallic nickel in the hydrogen atmosphere. The specimens used in the experiments were Ni-Cu/HZSM-5 with 0%, 5%, 15%, and 20% CeO 2 , labeled Ni-Cu/HZSM-5, 5%CeO 2 -Ni-Cu/HZSM-5, 15%CeO 2 -Ni-Cu/HZSM-5, and 20%CeO 2 -Ni-Cu/HZSM-5, respectively.
Experimental Device and Process
The catalytic HDO reaction of the bio-oil was conducted in a 500 mL stainless autoclave (Weihai Automatic Control Co.). The temperature control system was composed of an electric heating sleeve and a thermocouple. Adding polar solvents (toluene and n-butanol) or a supercritical fluid (n-butanol) to the bio-oil can improve its homogeneity, enhance its fluidity and thermal conductivity, and reduce coke deposition on the catalyst during the reaction [16,17]. Before the reaction, 60 g of bio-oil, 20 g of toluene, 20 g of n-butanol, and 5 g of catalyst were added into the reaction vessel. Then, the air in the reactor was replaced with hydrogen. After the replacement, the initial hydrogen pressure was 2 MPa at room temperature. The heating rate of the reactor was 3 °C/min. After the reaction, the reactor was cooled to room temperature, and the catalyst was filtered from the liquid phase and washed with alcohol for further analysis. The liquid phase was then separated via a separatory funnel into two phases (i.e., upgrading oil and water phase). The experiment was repeated three times, and the data are the average of these three runs. The upgrading oil yield (Y) is calculated as follows [18]: Y = m product / m feed × 100%, where m product and m feed denote the mass of the upgrading oil and the feedstock, respectively.
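As a minimal sketch of the yield expression above, the calculation can be reproduced as follows (the product mass used here is illustrative, chosen so that a 60 g bio-oil feed gives the 47.6 wt% yield reported for 15%CeO2-Ni-Cu/HZSM-5; it is not a measured value):

```python
def upgrading_oil_yield(m_product_g: float, m_feed_g: float) -> float:
    """Upgrading-oil yield Y in wt%: Y = m_product / m_feed * 100."""
    return m_product_g / m_feed_g * 100.0

# Illustrative: 28.56 g of upgrading oil from a 60 g bio-oil feed
print(round(upgrading_oil_yield(28.56, 60.0), 1))  # → 47.6
```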
Bio-Oil Analysis
The water content in the sample was determined by using a KF-1A semiautomatic moisture analyzer (Shanghai Baoshan Seiko Electronic Instrument Factory).The heating value of the bio-oil was measured by a ZDHW-6000 automatic calorimeter (Hebi, Henan).
The elemental analysis of the bio-oil was conducted by using a Flash EA 1112 analyzer (Thermo Electron Co.). GC-MS was performed using an Agilent 7890A-5975C equipped with a VF-1701ms capillary column (30 m × 0.25 mm × 0.25 µm). The GC split was 1:100, and the injector temperature was set to 250 °C; the injection volume was 1 µL (samples were diluted 10-fold with CH 2 Cl 2 ). The oven temperature was maintained at 50 °C for 3 min, increased to 200 °C at a rate of 3 °C/min, and then maintained at 200 °C for 50 min.
Catalyst Characterization
XRD was performed by using a D500 powder diffractometer (Siemens, Germany). Cu-Kα was used as the ray source with a step length of 0.05°, a voltage of 35 kV, a current of 30 mA, and a scanning range of 10° < 2θ < 90°. TEM images were obtained by using a Tecnai G2 20S-TWIN transmission electron microscope (FEI Company, Holland) with an acceleration voltage of 220 kV.
N 2 adsorption-desorption was used to determine the specific surface area and the pore type of the catalyst. The calculation of the specific surface area was based on the Brunauer-Emmett-Teller equation, and the calculation of the pore volume was based on the t-plot method.
NH 3 -Temperature-Programmed Desorption (TPD) was carried out in a quartz tube reactor with a thermal conductivity detector (TCD). 100 mg of the sample was pretreated in a flow of helium (30 mL/min) at 400 °C for 1 h and, after cooling to 100 °C, ammonia adsorption was carried out. Subsequently, excess physisorbed ammonia was removed by purging with helium at 100 °C for 2 h.
H 2 -Temperature-Programmed Reaction (TPR) was performed on an Autosorb-iQ (3P Instruments, Germany) fully automatic gas sorption analyzer. 70 mg of the sample was pretreated in a flow of Ar (20 mL/min) at 200 °C for 30 min and, after cooling to room temperature, was switched to a reducing gas flow of 5% H 2 -Ar (40 mL/min) for a temperature-programmed reduction from room temperature to 950 °C at 10 °C/min. The change in H 2 content of the tail gas was measured by TCD after dehydration in a cold trap (−85 °C).
XPS was performed with an ESCALAB 250 spectrometer (Thermo Fisher Scientific, USA) to determine the surface elemental composition and the valence of each element.
A TG analysis was performed on an STA 449 PC thermal analyzer (NETZSCH-Gerätebau GmbH, Germany) by using 5 mg of the sample at an air flow rate of 60 mL/min and a ramp rate of 10 °C/min. The instrument has interchangeable sensors for simultaneous Differential Scanning Calorimetry (DSC) measurements.
Effect of CeO 2 on the Catalytic Yield
The CeO 2 content of Ni-Cu/HZSM-5 is not the only important factor affecting the HDO process. The reaction temperature also has a significant effect on the catalytic activity. Here, we describe the HDO effect of catalysts with different amounts of CeO 2 and the HDO effect with specific CeO 2 contents under different reaction temperatures.
Upgrading Oil Yield
Figure 1 shows the yield (Y) of upgrading oil at 270 °C for 1 h with different catalysts. Y is higher with the addition of CeO 2 than without it, and Y increases first and then decreases with increasing CeO 2 content. With 15%CeO 2 -Ni-Cu/HZSM-5, Y reaches its highest value, 47.6 wt%.
To further explore the effect of reaction temperature on the HDO of bio-oil over the Ni-based catalyst with 15% CeO 2 , we used 15%CeO 2 -Ni-Cu/HZSM-5 at different temperatures. As shown in Figure 2, Y starts to decrease with the increase of reaction temperature, and Y declines rapidly between 270 and 300 °C.
Main Properties of Bio-Oil
The properties of the bio-oil after HDO are an important index for evaluating catalytic activity. Table 1 lists the main properties of the bio-oil and the upgrading oils obtained at 270 °C for 1 h with different catalysts. Compared with those of the bio-oil, the main properties of the upgrading oils with the different catalysts improved to varying degrees. The water and oxygen contents decreased significantly, with a significantly increased high heating value. When the CeO 2 content was 15%, the oxygen content reached its lowest value (21.7%), indicating that the HDO effect was optimal. At 15% CeO 2 , the water content of the oil phase was slightly high and the high heating value slightly low, but the difference was negligible. Considering the oil yield and the HDO effect, the 15%CeO 2 -Ni-Cu/HZSM-5 catalyst appears to be suitable for the HDO process. The effect of HDO increased with the rise of reaction temperature, and the water and oxygen contents decreased with increasing temperature (Table 2), indicating that the HDO of bio-oil gradually strengthened with temperature. However, the amount of coke deposited on the catalyst after the reaction increased rapidly as the reaction temperature rose from 270 to 300 °C. The coke content on the surface of the catalyst in the bio-oil HDO process was measured by TG analysis: the weight losses of the 15%CeO 2 -Ni-Cu/HZSM-5 catalyst after reactions at 250, 270, 300, and 330 °C for 1 h were 7, 15, 48, and 54 wt%, respectively (Figure 3). Temperature is one of the main factors affecting coke deposition, and coke deposition causes the deactivation of the catalyst. Y decreased rapidly between 270 and 300 °C. From the energy utilization perspective, 270 °C is a suitable temperature for HDO when the CeO 2 content of the Ni-Cu/HZSM-5 catalyst is 15%.
GC-MS Analysis
In order to explore the changes in the chemical composition of the bio-oil before and after the HDO reaction, the compositions of the bio-oil and the upgrading oils were determined by GC-MS; the results are summarized in Table 3. The main components of the bio-oil before HDO were phenols, ketones, and acids; after the HDO reaction with the different catalysts, some chemical compositions changed in the upgrading oils. The common changes for the upgrading oils were (1) that the content of acids, alcohols, and ketones decreased significantly and (2) that the content of esters and hydrocarbons increased significantly. For the upgrading oil with 15%CeO 2 -Ni-Cu/HZSM-5, the acid content was the lowest, while the ester content reached the highest value and the hydrocarbons also reached their maximum.
XRD
The diffraction peaks for each catalyst near 6.7° and 24.8° were due to the catalyst support (HZSM-5, PDF# 42-0305), and the peak near 46.5° was due to the Ni-Cu/HZSM-5 catalyst without CeO 2 (Figure 4). These peaks were expected to be the Ni-Al compound formed between the metal and the catalyst support (Ni-Al, PDF# 20-0019). However, the addition of CeO 2 into the Ni-Cu/HZSM-5 catalyst inhibited the presence of the Ni-Al compound phase, in which the intensity of the peak (near 46.5°) was remarkably decreased. The addition of CeO 2 can seemingly weaken the bond between the active metal and the catalyst support and can improve the activity of the catalyst. The crystallite size was calculated by the Scherrer formula (Table 4). The crystallite size decreased first and then increased with increasing CeO 2 content. When the content of CeO 2 was 15%, the average particle size of the catalyst was the smallest, at 19.2 nm.
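As a rough sketch, the Scherrer crystallite-size estimate used for Table 4 can be reproduced as follows. The peak width (FWHM) is an illustrative assumption, not the paper's measured value; the Cu-Kα wavelength (0.15406 nm) and shape factor K = 0.9 are the usual defaults and are likewise assumed here:

```python
import math

def scherrer_size_nm(fwhm_deg: float, two_theta_deg: float,
                     wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    """Scherrer equation: D = K * lambda / (beta * cos(theta)).

    beta is the peak FWHM in radians; theta is half the diffraction angle.
    """
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative: a 0.45-degree FWHM for the peak near 2-theta = 44.7 degrees
# gives ~19.1 nm, of the same order as the 19.2 nm reported in Table 4.
print(round(scherrer_size_nm(0.45, 44.7), 1))
```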
TEM
The catalysts before the HDO reaction were imaged by TEM (see Figure 5a-d for CeO 2 contents of 0, 5, 15, and 20%, respectively). The average particle size of the catalysts with CeO 2 is smaller than that of the catalyst without CeO 2 . A suitable CeO 2 content (e.g., 15%) can effectively inhibit grain growth and, hence, provide a better dispersion of Ni with the smallest particle size (Figure 5). However, at the higher CeO 2 content of 20%, aggregation of Ni appears due to excessive CeO 2 accumulation on the pores/surface of the catalyst. Therefore, the catalyst with a 15% CeO 2 addition is more suitable for the HDO of bio-oil.
N 2 adsorption-desorption
The surface areas and pore volumes measured by nitrogen adsorption-desorption for the different catalysts before and after the 1 h HDO reaction at 270 °C are shown in Table 5. As can be seen from Figure 5, with the increase of the CeO 2 content, the dispersion of Ni increased; the nickel became smaller particles attached to the catalyst support, with a decreased specific surface area (Table 5). Clearly, the specific surface area and pore volume of the catalyst were reduced with the addition of CeO 2 , but the activity of the catalyst was improved (see the yields in Figure 1). After the HDO reaction, carbon was deposited on the catalyst, resulting in decreased surface area and pore volume. For Ni-Cu/HZSM-5, 5%CeO 2 -Ni-Cu/HZSM-5, 15%CeO 2 -Ni-Cu/HZSM-5, and 20%CeO 2 -Ni-Cu/HZSM-5, the differences in surface area before and after the reaction were 188, 113, 100, and 105 m 2 /g, respectively. The advantage of using CeO 2 is reflected in the smaller relative reduction of the specific surface area: a 54% reduction after HDO with Ni-Cu/HZSM-5, versus only a 31% reduction with 15%CeO 2 -Ni-Cu/HZSM-5.
NH 3 -TPD
The acidity of the catalyst surface can be effectively measured by NH 3 -TPD. Figure 6 presents the NH 3 -TPD curves of the catalysts with different CeO 2 contents, in which Figure 6a-d correspond to CeO 2 contents of 15, 5, 20, and 0%, respectively. The four catalysts with different CeO 2 contents have the same peaks at 225 and 430 °C, corresponding to the Bronsted and Lewis acid positions [19], respectively. The extents of these two peaks before the reaction are shown in Table 6. The total acid sites decreased significantly with the addition of CeO 2 . However, the Bronsted acid sites increased first and then decreased at 20% CeO 2 . At a 15% CeO 2 addition, the ratio of Bronsted acid sites to total acid sites was the largest (51%), as compared to only 21% for Ni-Cu/HZSM-5. The increase in the ratio of Bronsted acid to total acids may enhance the cracking ability for large molecules [20].
H 2 -TPR
H 2 -TPR can effectively determine the reducibility of the catalyst. Figure 7 shows the H 2 -TPR profiles of the catalysts before the HDO reaction. None of the samples have a peak at high temperature (>600 °C), which corresponds to the reduction peak of nickel spinel [21], indicating that no nickel spinel was produced in the experimental catalysts. The XRD plots in Figure 4 likewise indicate the absence of a spinel phase for Ni. For example, NiO shows peaks at 2θ of 37.8°, 44.7°, and 62.3° (PDF# 78-0429) and NiAl 2 O 4 at 2θ of 37.8°, 42.3°, and 68.4° (PDF# 78-0552), as reported by Shih and Leckie [22]; these 2θ values are not observed in the Figure 4 XRD spectra, albeit with a slight peak at 44.7°. The slight differences in the TPR profiles (with respect to peak pattern and corresponding temperatures) can be ascribed to variation in Ni dispersion and in the active metal-HZSM-5 interaction. Compared with the reduction peaks of the catalyst with a 20% CeO 2 content, the catalyst with a 15% CeO 2 content exhibited a sharper reduction peak, indicating the presence of more uniform and highly dispersed metal particles [23]. Therefore, different CeO 2 contents have slight effects on the reduction temperature of the catalyst; in particular, the reduction temperature was lowest for the catalyst with a 15% CeO 2 content. Usually, rapid deactivation of the catalyst is caused by carbon deposition at a high HDO temperature. A low reduction temperature permits a low-temperature HDO process and, hence, a lesser degree of carbon deposition on the catalyst during HDO. The catalyst with 15% CeO 2 can promote the hydrogenation process at low temperatures, and carbon deposition on the catalyst can be effectively slowed during HDO.
XPS
XPS was used to characterize the electronic state and distribution of elements on the surface of the catalysts. By comparing the XPS spectra of the 15%CeO 2 -Ni-Cu/HZSM-5 catalyst before and after the reaction, the change in the valence state of cerium during the reaction can be analyzed. Furthermore, by comparing the XPS spectra of the different catalysts under the same reaction conditions, the morphology and distribution of the carbon deposited on the surface of the catalysts can be effectively analyzed.
The binding energies near the peaks shown in Figures 8 and 9 were directly retrieved from the instrument. The XPS analyses of the catalyst before and after HDO exhibited peaks in the vicinity of 884.5 and 905.4 eV (Figure 8a), which correspond to the electron binding energies of Ce3d 5/2 and Ce3d 3/2 , respectively [24]. The electron binding energy of 920.6 eV (Figure 8a) belongs to the position peak of Ce 4+ [25]. Figure 8b shows that the binding energy of Ce in the catalyst after the reaction exhibits a chemical shift of more than about 2.5 eV in the Ce3d peaks, indicating a change in the valence state of Ce in the strongly reducing atmosphere. According to the standard XPS spectra of Ce, the Ce3d 5/2 peak (887.9 eV) in the catalyst before the reaction is attributable to CeO 2 , whereas the Ce3d 5/2 peak after the reaction (885.1 eV) is attributable to low-valence Ce, indicating that Ce 4+ in the catalyst is reduced to Ce 3+ during the reaction. In addition, as part of the Ce 4+ in CeO 2 is reduced to Ce 3+ , oxygen vacancies are produced and free electrons and lattice oxygen are released. The released electrons can be transferred to the Ni 0 active sites, thereby adjusting the electronic state of the Ni atoms on the surface, improving the interaction between the support and the active component, resulting in a better dispersion of Ni [26,27], and thus improving the catalyst's redox ability. The release of lattice oxygen is also beneficial for transferring active carbon species on the catalyst surface, thus promoting coke removal [27,28]. Figure 9 shows the Ni 2p XPS spectra of the used catalysts after the reaction at 270 °C for 1 h. The electron binding energies of Ni 2p in the Ni-based catalysts were approximately 856.5, 862, and 870 eV, corresponding to the Ni-O bond, the shakeup peaks, and the Ni 0 of the reduced state, respectively [29]. As seen
from Figure 9, the peak (865-869 eV) near 870 eV (Ni 0 of the reduced state) is the lowest for the catalyst without CeO 2 , and, correspondingly, its 856 eV peak (Ni-O) is relatively the highest. Thus, in the CeO 2 -added catalysts, a small part of the Ni 0 was converted into Ni-O bonds, with a small amount of nickel oxidized, eventually leading to enhanced hydrogenolysis. In other words, the virgin catalyst limits the HDO reaction and is more prone to carbon deposition [30]. The Ni 0 content in the catalysts with CeO 2 addition is higher than in that without CeO 2 , and a slightly negative shift in the electron binding energy is observed, which indicates that the addition of CeO 2 can increase the electron density of Ni 0 , increase the electronegativity of Ni, and improve the reducibility of Ni [30]. Moreover, the Ni 0 contents and the electron binding energies of the catalysts with different CeO 2 contents were also significantly different. In short, at a 15% CeO 2 content, the Ni 0 content was the largest, implying that the carbon deposition was the lowest and the carbon resistance was the strongest.
TG
A TG analysis is an effective method for obtaining carbon activities and regeneration levels. Figure 10 shows the weight loss of different catalysts after reaction under the same conditions, in which a-d correspond to the 15%CeO2-Ni-Cu/HZSM-5, 5%CeO2-Ni-Cu/HZSM-5, 20%CeO2-Ni-Cu/HZSM-5, and Ni-Cu/HZSM-5 catalysts, respectively. The weight losses were 14, 23, 29, and 41% at 270 °C after a reaction time of 1 h, indicating that the amount of coke deposition is significantly lower with CeO2 addition than without. This means that the catalysts with added CeO2 exhibit better resistance to carbon deposition. Again, with a 15% CeO2 addition the weight loss, and hence the coke deposition, is the least (14%), agreeing with the previous assessments that 15% CeO2 is the optimal amount for the HDO reaction of bio-oil. This finding suggests that a proper amount of CeO2 can improve the carbon resistance of the catalyst, but that excessive CeO2 reduces it. An excessive amount of CeO2 may cover part of the Ni active sites, reducing the chance of contact between the reactants and the active sites and resulting in a large amount of carbon accumulation.
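The ranking implied by these weight-loss figures can be made explicit in a few lines; the sketch below (illustrative only, with the values taken from the text) selects the catalyst with the lowest post-reaction weight loss and computes the relative coke reduction versus the CeO2-free catalyst.

```python
# Post-reaction TG weight loss (wt%) per catalyst; values as reported in the text
weight_loss = {
    "15%CeO2-Ni-Cu/HZSM-5": 14,
    "5%CeO2-Ni-Cu/HZSM-5": 23,
    "20%CeO2-Ni-Cu/HZSM-5": 29,
    "Ni-Cu/HZSM-5": 41,
}

# Lowest weight loss = least coke = strongest carbon-deposition resistance
best = min(weight_loss, key=weight_loss.get)

# Relative coke reduction of the best catalyst vs. the CeO2-free baseline
reduction = 100 * (weight_loss["Ni-Cu/HZSM-5"] - weight_loss[best]) / weight_loss["Ni-Cu/HZSM-5"]
print(best, f"{reduction:.0f}% less coke than Ni-Cu/HZSM-5")
```

A drop from 41 to 14 wt% corresponds to roughly a two-thirds reduction in deposited coke.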
Figure 11 of the DSC profiles (heat loss, mW/mg) illustrates the type and distribution of the carbon deposits from the heat-release curves of different catalysts under the same reaction conditions. Coke decomposition can be divided into four stages, namely 100-250, 250-450, 450-600, and 600-750 °C, which correspond to the decomposition temperatures of small-molecule components, soft coke, hard coke, and graphite/graphite-like carbon, respectively [2,31]. The catalyst without CeO2 addition shows an exothermic peak at 380 °C. The exothermic peaks of the CeO2-added catalysts clearly shifted to the left: with 5%, 15%, and 20% CeO2, the peaks were at 340, 310, and 360 °C, respectively, which proves that the main deposit on the catalyst is a soft carbon (soluble in organic solvent). The peak temperature reached its lowest (310 °C) when the CeO2 load was 15%, because the soluble carbon deposited at this time is a derivative with a relatively small molecular weight (i.e., low boiling point and high solubility). The results indicate that the addition of CeO2 may lead to a substantial improvement in the resistance of the Ni-Cu/HZSM-5 catalyst against carbon deposition, and that different CeO2 contents have different effects on this resistance. In other words, when the CeO2 load is 15%, the catalyst's ability to resist carbon may be the strongest.
Conclusions
This study on the addition of different CeO2 contents to the Ni-Cu/HZSM-5 catalyst used for the HDO of bio-oil derived the following conclusions:
1. The effect of bio-oil HDO can be improved by adding CeO2. The upgrading oil yield can reach a maximum of 47.6% when the CeO2 content is 15%. From the energy utilization perspective, 270 °C is the suitable temperature for HDO when the CeO2 content of the Ni-Cu/HZSM-5 catalyst is 15%.
2. The addition of CeO2 can improve the Ni dispersion and redox ability, increase the Bronsted acidity ratio, and decrease the particle size of the catalyst. When the CeO2 content is 15%, the performance of the catalyst is at its optimum: the particle size of the active metal is the smallest, its dispersibility is the best, the ratio of Bronsted acids to total acids is the largest, and the H2-TPR reduction temperature is the lowest. In addition, the reductions in specific surface area and pore volume of the catalyst after the reaction are the smallest compared with the catalyst before the reaction.
3. The addition of CeO2 can reduce the coke deposition on deactivated catalysts after HDO. The catalyst's carbon-deposit resistance is at its optimum when the CeO2 content is 15%. Compared with the catalyst without CeO2, the coke deposition decreases from 41 to 14 wt%. At this CeO2 content, the temperature of the exothermic peak of coke combustion is also the lowest, proving that the soft coke formed is a derivative with a relatively small molecular weight.
Figure 1. The yield (Y) with different catalysts at 270 °C for 1 h.
Table 4. The crystal particle size of the catalyst.
Figure 9. The Ni 2p XPS spectra of the deactivated catalyst after the reaction at 270 °C for 1 h.
Table 1. The properties of bio-oil and upgrading oil with different catalysts at 270 °C for 1 h.
Table 3. The chemical composition of bio-oil and upgrading oils.
Table 5. The specific surface area and pore volume of different catalysts before and after the reaction.
Table 6. The peak areas for the catalysts with different CeO2 contents.
H2-TPR can effectively determine the reducing capacity of the catalyst.
Social contact as a strategy to reduce stigma in low- and middle-income countries: A systematic review and expert perspectives
Social contact (SC) has been identified as a promising strategy for stigma reduction. Different types of SC exist. Various scholars defined positive factors to strengthen SC. This study aims to investigate the application and effectiveness of SC as a strategy to reduce stigmatisation across stigmas, settings and populations in low- and middle-income countries (LMICs). We specifically examine the use of positive factors. A systematic review was conducted in twelve electronic databases using key terms related to stigma AND social contact AND intervention AND LMICs. Data were synthesised narratively. Study quality was assessed with the Joanna Briggs Institute critical appraisal checklists. Additionally, semi-structured interviews were used with first/corresponding authors of included publications to investigate their practical experiences with SC. Forty-four studies (55 publications) were identified. Various stigmas (n = 16) were targeted, including mental health (43%). Indirect (n = 18) and direct contact (n = 16) were used most frequently, followed by collaboration, imagined and vicarious contact, or a combination. The most applied additional strategy was education. Almost half of the studies, explicitly or implicitly, described positive factors for SC, such as PWLE training or disconfirming stereotypes. The majority suggested that SC is effective in reducing stigma, although inconsistent reporting overshadows conclusions. Perspectives of people with lived experience (PWLE) were infrequently included. Expert perspectives stressed the importance of contextualisation, PWLE participation, and evaluation of SC. This study provides an overview of SC as a stigma reduction strategy within LMICs. Conclusions about which type of SC is more effective or whether SC is more effective for a specific stigma category cannot be drawn.
We recommend future research to strengthen reporting on effectiveness as well as PWLE perspective and SC processes, and to further critically examine the potential of SC. An overview of positive factors applied to strengthen SC is provided, which can stimulate reflection and guide future SC.
Introduction
Stigma is well-known to have a profound negative impact on health and quality of life, and the construct has been of interest to various scholars since Goffman's seminal work [1]. Stigma was further conceptualised as a phenomenon rooted in social interaction, defined as "the co-occurrence of (...) labelling, stereotyping, separation, status loss, and discrimination", with the specification that "for stigmatization to occur, power must be exercised" [2].
Stigmatisation limits access to services (including, but not limited to, health) and engagement in care [3], poses a barrier to help-seeking behaviours [4], negatively influences social relationships and participation [5], reduces the opportunities of individuals [1] including access to resources [5], and in general contributes to health inequity and social inequalities [5,6]. Stigma can cause more harm than the burden of the condition itself [7-10]. At the intersection of multiple stigmas, the (health) impact can be compounded [11,12].
The detrimental burden of stigmatisation on population health demands action. Recent reviews investigated the state of the art of stigma reduction interventions [7,9,13-16]. Compared to high-income countries (HICs), the development and evaluation of anti-stigma programmes is limited in low- and middle-income countries (LMICs) [17,18]. As interventions are context-dependent, those originating from HICs cannot be automatically transferred to LMICs [19].
Social contact (SC) has been identified as a promising strategy for stigma reduction [7,18,20], which we operationalise as intentional interaction between people with lived experience of a certain (stigmatised) condition (PWLE) and people without that specific condition [21-24]. Different SC types exist, such as direct, indirect and imagined contact. The rationale of direct face-to-face SC originated from the perspective that contact between majority and minority groups could reduce prejudice [25,26]. In situations where direct contact is less applicable, due to e.g. the presence of high prejudice [27] or access restrictions [28], SC types such as indirect (i.e. non-face-to-face contact such as video testimonials or radio diaries [27]), imagined (i.e. imagining positive interaction [29]), vicarious (i.e. observing in-group members having successful cross-group contact [30]), and extended (i.e. knowing that in-group members have cross-group friends [31]) approaches have also been increasingly and successfully applied [27,28]. Moreover, these SC types are often used to reach large audiences, as they are easy to spread and to scale up, such as in (large) campaigns [32,33].
The application of SC as a stigma reduction strategy comes with a few knowledge gaps. First, recent systematic reviews or frameworks on SC as a stigma reduction strategy focused on mental health stigma and indirect SC only [28,34,35], although SC has been employed to reduce physical health stigma, e.g. HIV/AIDS [32,36], and not health-related stigma concerning age [37] or the experience of sexual violence [38]. Recent research advocates learning about stigma (reduction) across stigmas [12,39]. Second, although several scholars have investigated which (combination of) positive factors, referred to as "optimal conditions" [21,25,26] or key ingredients [40], are required for SC to be most effective and least harmful in reducing prejudice and stigmatisation, researchers have indicated that more knowledge is required to improve SC in practice [20,36,41]. Third, recent research highlighted that the evidence base of SC is contested, for example through biased reporting and lacking methodological rigour [23]. Additionally, there are several criticisms, such as that SC may enhance rather than reduce stigmatisation [24,36,42], or that positive testimonies of PWLE might not be believed and therefore increase stereotypes [43]. These gaps trigger a more thorough look into the application and effectiveness of SC.
Against this background, to contribute to the knowledge base, this study investigates SC as a strategy to reduce stigmatisation across health-related and not health-related stigmas, populations and settings in LMICs. The main aim of the systematic review was to identify contact-based stigma reduction interventions used in LMICs, across stigmas and populations, and to assess their content and effectiveness.
To support the assessment of content and effectiveness, we additionally aimed to: (a) examine whether, and if so which, known or new factors to strengthen SC have been applied; and (b) explore which lessons were drawn to improve SC. These questions were answered by the review and complemented with expert perspectives.
To support the use of this review and stimulate reflection on and guide future implementation of SC, we have summarised these findings and recommended future research directions to improve SC.
Materials and methods
A systematic review (part 1) and an additional exploration of expert perspectives (part 2) were conducted.
An intervention in which SC is used as a strategy was understood as any form of created contact where PWLE and people without that stigma experience interacted together, through any form of SC. Stigma reduction was understood as a change in stigmatising practices or experiences, which might be reported in different ways, such as increased warmth/empathy or reduced social distance. Studies were excluded when SC was not explicitly initiated as part of a stigma reduction intervention, such as general social media exposure or existing interactions. Studies were also excluded in case of two-way prejudice, implying that there was no strict power imbalance and the stigma definition used [2] was thereby not met. A list of all inclusion/exclusion criteria is provided in S1 Text.
Data extraction and quality assessment. Data from the included studies were extracted in Excel and included general information about the study (e.g. publication year, author name, country), study methods, participant characteristics, type of intervention (including SC) and its content, information regarding positive factors, and effectiveness. The development of the extraction sheet was informed by previous work [14], after which it was pilot-tested and adjusted where necessary. Data extraction was independently conducted by two researchers (first and last author). Inter-rater reliability scores were high (85%, 90%, 96% for 3 studies), thus the remaining studies were divided between the two researchers. All studies were cross-checked by the first or last author for accuracy to increase internal validity. Discrepancies were resolved through discussion.
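The inter-rater reliability reported here is simple percent agreement, i.e. the share of items coded identically by two raters. A minimal Python sketch of that calculation (illustrative only; the function name is ours, not the authors' actual tooling):

```python
def percent_agreement(rater_a, rater_b):
    """Percentage of items on which two raters' codes match exactly."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must code the same items")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)

# e.g. 18 of 20 extraction fields coded identically -> 90% agreement
print(percent_agreement([1] * 18 + [0, 0], [1] * 18 + [1, 1]))
```

Note that percent agreement does not correct for chance agreement, unlike coefficients such as Cohen's kappa.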
The quality of included studies was assessed by two researchers (CD, KH): both checked 15% independently. As discrepancies were minimal, the remaining studies were divided among both researchers. Arising questions were discussed and resolved. Joanna Briggs Institute (JBI) critical appraisal checklists specific to the research methodology were used [47]. In case a study used mixed methods related to stigma outcomes, both a quantitative and a qualitative checklist were used. Quality was defined as high when ≥85% of the relevant questions of the JBI checklist could be answered with a "yes", moderate when this was the case for 40% to <85% of relevant questions, and low when this was the case for <40%.
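The quality thresholds above amount to a simple banding rule; a hedged Python sketch (function name and signature are ours, not part of the review's methods):

```python
def jbi_quality(yes_answers, relevant_questions):
    """Map the share of JBI checklist questions answered 'yes' to a quality label,
    using the thresholds stated in the review: >=85% high, 40%-<85% moderate, <40% low."""
    share = yes_answers / relevant_questions
    if share >= 0.85:
        return "high"
    if share >= 0.40:
        return "moderate"
    return "low"

# e.g. 9 of 10 relevant questions answered "yes" -> "high"
print(jbi_quality(9, 10))
```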
Data synthesis. As both quantitative and qualitative studies were included, JBI's convergent integrated approach for mixed methods reviews was used [48]. Narrative synthesis was conducted. Subsequent inductive qualitative analysis of extracted data on positive factors and lessons learnt was conducted, and themes and categories were refined through discussion. Specific attention was given to stigma, age cohort, location, and effectiveness per social contact type, supported by visualisations.
Part 2: Expert perspectives
To provide additional insight into the application of SC as a stigma reduction strategy, expert perspectives were explored through interviews. The aim was to enrich the systematic review with insights into, for example, choices around the type of SC and positive factors for SC. Reporting of this qualitative research followed the Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines [49].
Ethics. A protocol was developed a priori. Ethical approval was requested from the Research Ethics Committee Arnhem-Nijmegen (registration number: 2022-13787); the committee judged that ethical approval was not required under Dutch national law. Complete voluntary participation was stressed in the written information and repeated at the beginning of each interview. Each respondent provided informed consent.
Study design. Semi-structured individual interviews were conducted with first/corresponding authors of studies included in the review. All interviews took place online in May and June 2022.
Recruitment strategy and respondents. The qualitative study was announced in the same e-mail in which first/corresponding authors of included studies were asked for additional relevant studies. This announcement was followed by an official invitation to participate. All respondents received written information about the project. Purposive sampling was used to ensure diversity, with several considerations guiding the recruitment: 1) approach the authors of the most recent publications first; 2) variety of SC types; 3) variety of stigmas and contexts; 4) the study provided a rationale for applying SC; and 5) SC was the main stigma reduction strategy. Respondents needed to speak English or Dutch. In case of no response after two contact attempts, first/corresponding authors who had not yet been approached because they did not meet all of the above considerations were contacted. We aimed to interview at least 10% of the first/corresponding authors of studies included in the review. This 10% was based on the practical reason of available time; it was not expected to counteract the explorative purpose of the interviews.
Data collection. A semi-structured topic guide was created and pilot-tested (see S2 Text). During the interviews, the experiences of stigma reduction researchers/practitioners concerning the use of SC were collected. Each interview started with introducing the researcher, explaining the goal of the interview, and re-confirming informed consent. Interviews were held by a trained interviewer (first author) through a video call with Microsoft Teams. The interviewer had no (work) relation with any of the participants. All interviews were recorded and transcribed verbatim in the language spoken during the interview. Anonymity of the participants was maintained during analysis. Data were stored in a password-protected secure database.
Data analysis. Transcripts were analysed using thematic analysis, a qualitative research method "for identifying, analysing and reporting patterns within data" [50]. The qualitative software programme NVivo 12 was used. The first transcript was independently coded by two researchers (first and last author) to minimise subjectivity. The findings were discussed, and discrepancies debated until consensus was reached. As discrepancies were minimal, the remaining transcripts were coded by the first author and checked by the last author.
Systematic review
Study selection. The first search identified 2686 records, of which 889 were duplicates. The second search identified 276 records with 60 duplicates. Through title/abstract screening of the 2013 remaining records, 1739 were considered irrelevant. After full-text screening, 244 records were excluded. Additional search strategies identified 25 eligible records. Eight unique studies had two or more publications. This resulted in a total of 44 main studies with 55 underlying publications included in this review. We described the results based on the 44 main studies and, in case of multiple publications, supplemented information, when necessary, from the corresponding publications.
General study characteristics. Key characteristics for each study are provided in Table 1. Publications run from 2003 to 2022. Studies took place in 19 countries covering all WHO regions. Most studies were conducted in the South-East Asian region (n = 14, 32%), followed by the European (n = 10, 23%), African (n = 9, 23%) and Western Pacific (n = 7, 16%) regions. The Eastern Mediterranean and Americas regions were underrepresented with two studies (5%) each. Studies were randomised controlled trials (RCTs) (n = 16, 36%), quasi-experimental studies (n = 15, 34%) or non-comparison studies (n = 13, 31%).
General intervention characteristics. The 44 included studies together had a total of 84 study arms. In total, 53 study arms included SC, as nine control arms included a comparison stigma reduction intervention with SC. The other arms consisted of 15 control arms without an intervention, 11 with a comparison stigma reduction intervention without SC, and 5 with an intervention irrelevant to stigma reduction. Of the fifty-three SC interventions, 18 (34%) employed indirect and 16 (30%) direct SC, 8 (15%) used a combination of SC approaches, 7 (13%) employed SC through collaborative activities, and 4 (8%) employed imagined or vicarious SC. When indirect SC was applied in a study arm, 57% (n = 12) used video, 14% (n = 3) used radio, 10% (n = 2 each) used reading comic books or reading a story, and 5% (n = 1 each) used participatory theatre or Photo Voice. Details about each SC intervention can be found in Table 1.
Settings and target populations. Studies were mostly conducted in one setting, typically in higher education (n = 14, 32%), at community spots (n = 11, 25%), or in health settings (n = 8, 18%). Four studies (9%) were conducted in more than one setting (see Fig 3). Young adults were the target population of one-third of the studies (n = 15, 34%), and young adults together with adults were targeted in one-fourth (n = 12, 27%). Adults alone were the target group in ten studies (23%). Children together with young adults, and children alone, were targeted in two studies each (5%), while one study (2%) targeted a combination of children, young adults and adults. Two studies (5%) did not report on age (see Fig 4).
Stigma measures and measurements. Different stigma-related measures were used. Most studies (n = 41, 93%) measured stigma quantitatively using a range of stigma scales. Nine of these studies (22%) complemented quantitative with qualitative measures, while few studies (n = 3, 7%) applied qualitative methods only to assess stigma, using open-ended questionnaires, individual interviews and/or focus group discussions, reporting on any changes experienced after the SC intervention. Of the 44 studies, the majority (n = 36, 82%) measured stigma-related outcomes before and after the intervention. Eighteen studies (41%) performed single (n = 11, 25%) or multiple (n = 7, 16%) follow-up measurements, which took place between 1 week and 22 months after the intervention. Of these, five (28%) concluded the last measurement within one month after study end and eleven (61%) after 6 months and beyond. Five studies (11%) measured changes, e.g. stigma or self-esteem, with PWLE.
Notes to Table 1: effects were reported as follows: S = properly reported + significant; NS = properly reported + not significant; S* = incompletely reported + significant; NS* = incompletely reported + not significant; (T) = main effect of time; (G) = main effect of group; (I) = interaction effect. If reported, effect sizes were given: Cohen's d of 0.2 is considered small, 0.5 medium, and 0.8 large; partial η2 of 0.01 small, 0.06 medium, and 0.14 large; R2 of 0.02 small, 0.13 medium, and 0.26 large (f2 rules of thumb). See S3 Text for more details on the quality appraisal; in case of other publications next to the main publication, quality is given in brackets, and when a study used mixed methods, the quality of the quantitative as well as the qualitative part (marked with *) is given. https://doi.org/10.1371/journal.pgph.0003053.t001
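The effect-size rules of thumb cited in the Table 1 notes (Cohen's d: 0.2/0.5/0.8; partial η2: 0.01/0.06/0.14; R2: 0.02/0.13/0.26) can be expressed as a small lookup. The sketch below is illustrative only; the function and the measure keys are our own naming, not from the review:

```python
def interpret_effect(value, measure):
    """Label an effect size using the small/medium/large thresholds
    quoted in the review's Table 1 notes."""
    thresholds = {
        "d": (0.2, 0.5, 0.8),            # Cohen's d
        "partial_eta2": (0.01, 0.06, 0.14),
        "r2": (0.02, 0.13, 0.26),        # f^2-derived rules of thumb
    }
    small, medium, large = thresholds[measure]
    if value >= large:
        return "large"
    if value >= medium:
        return "medium"
    if value >= small:
        return "small"
    return "negligible"

# e.g. a Cohen's d of 0.15 falls below the 0.2 cut-off
print(interpret_effect(0.15, "d"))
```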
SC and intervention duration. Overall intervention periods ranged from 3 minutes for an imagined contact intervention to 3 years for an intervention with collaborative activities. Most interventions (n = 20, 45%) lasted less than one day, nine (20%) between one day and one month, nine (20%) between one month and one year, and three (7%) took place for one year or more. Three studies did not report on the intervention period. Direct, indirect, and imagined SC varied between 45 minutes and 65 hours, 5 and 90 minutes, and 3 minutes and 1 hour, respectively. One vicarious contact intervention had 6 sessions with 40 minutes of contact. Contact time within the studies employing collaborative activities (n = 7, 17%) was not computable.
Choices for SC type. Almost half of the studies (n = 18, 41%) provided an explanation for their choice of SC type beyond the rationale for SC. About half of these studies (n = 8, 44%) employed indirect SC. We divided the choices into two categories: practical and contextual/cultural. As reasons of practicality, accessibility of the intervention or feasibility of use was mentioned most (n = 8, 44%), followed by financial resources (n = 7, 39%), potential for reach (n = 6, 33%) and the daily reality which may hamper contact in real life (n = 3, 17%). Time was mentioned once. Contextually, the cultural sensitivity around the stigma, such as its illegality, was considered most (n = 4, 22%), followed by the fit of the SC type with the population (n = 3, 17%).
Intervention cultural adaptation. Two-thirds of the studies (n = 29, 66%) reported on (partial) cultural fit of the intervention. Of these studies, 86% (n = 25) mentioned that the intervention at least partially originated from the local setting, nine (31%) referred to using local customs such as listening to the radio together in the comfort of someone's home, five (17%) indicated that the intervention was pre-tested or piloted or that local beliefs were taken into account, such as considering context-related myths, and one (3%) indicated that adaptation consisted of translation. In general, the studies reported minimally on the details of cultural adaptation. The type of adaptation per study can be found in Table 1.
Positive factors for SC. One-fifth (n = 9, 20%) of the studies explicitly described the positive factors they considered when developing and/or implementing SC, and one-third (n = 16, 36%) did so implicitly. Overall, the studies concerning direct SC interventions or interventions combining two or more SC types applied, explicitly or implicitly, the most factors, as respectively ten (71%) and three (60%) of these studies applied one or more factors. In imagined contact (n = 2) no positive factors were mentioned. Of the studies integrating positive factors, the most employed factor was the creation of an interactive session (n = 9, 36%). This was followed by training of the resource person for the contact role, PWLE (moderately) disconfirming the stereotype, and two of Allport's conditions, namely equal status and support by authorities (each n = 7, 28%). Five contact interventions each indicated embedding the contact in the context, including education and creating perspective and empathy before facilitating contact, the creation of a friendly environment, and a focus on recovery (all n = 5, 20%), followed by ensuring that PWLE resource persons are similar to the audience (n = 4, 18%).
Lessons and recommendations to improve SC. About half of the studies (n = 24, 54%) shared learnings or recommendations regarding the application of SC. The positive factor mentioned most in these studies (n = 9, 38%) was to create multiple contact moments. Other highlighted recommendations were the training of resource persons (n = 6, 27%), support for behaviour change of the participants, a focus on recovery, acknowledgement of potential risks of PWLE participation, and follow-up on expected moments and challenges during the contact process (each n = 4, 18%). Factors mentioned in three studies (14%) were ensuring high levels of intimacy, creating positive experiences, strengthening support from family and friends, and recognising the demands the contact role puts on the PWLE. One study mentioned that, despite encouragement, interaction between PWLE and the target group was limited.
In Table 2, we summarise the factors applied and the lessons learnt to strengthen SC, as mentioned by the included studies.
Effectiveness of SC interventions. Of all studies reporting on stigma reduction quantitatively (n = 41, excluding one study reporting in percentages only), almost all (n = 38, 95%) reported statistically significant (main) time, (main) group or interaction effects on at least one stigma measure. However, reporting was often incomplete and the performed statistical analysis was often inadequate (see explanation below), which means that no conclusive interpretations about effectiveness could be made. Effect sizes were reported in 14 studies (35%), indicating negligible to small effects across studies. Table 1 includes details concerning reported outcomes per study.
In the measurement of the main effect of time, nine studies (22%) did not report on this. Of the studies that did report on this (n = 32, 78%), two-thirds (n = 20, 63%) of the quantitative studies reported statistical significance; another eleven comparative studies (34%) reported invalid statistical significance, as they did not compare the main intervention with the control arm(s). One study (3%) reported no statistically significant effect of time. The studies reporting a significant main effect of time included sixteen (84%) targeting mental health stigma, eight (73%) physical health stigma, seven (70%) not health-related stigmas and none (0%) multiple stigmas. While all five contact combination interventions and collaborative intervention studies reported a significant main effect of time, nine (64%) of the indirect contact interventions (n = 14) and none of the imagined or vicarious interventions (n = 3) did.

Of the comparative studies that could measure a main effect of group (n = 31), more than half (n = 19, 61%) did not report on this. Of the studies that reported on a main effect of group (n = 12, 39%), five (41%) reported statistically significant effects, three (25%) reported invalid statistically significant effects as they did not compare the main intervention with the control arm(s), and four (33%) reported no statistically significant effects. The studies reporting a significant main effect of group included four (29%) addressing mental health stigma, two (25%) physical health or not health-related stigmas and none (0%) multiple stigmas (n = 1). Of all SC types, the interventions applying indirect contact (n = 10) most often reported a significant effect of group (40%).

Concerning the interaction effect, twenty (65%) of the comparative studies did not report on this. Of the eleven studies that did, eight (73%) showed statistically significant interaction effects, hence stigma reduction, addressing mental health stigma (n = 4, 29% of all eligible mental health stigma studies), not health-related stigmas (n = 3, 38%) and multiple stigmas (n = 1, 100%). Three interventions (10%), of which two applied direct contact (one mental health and one not health-related stigma) and one used indirect contact (physical health stigma), showed no statistically significant interaction effects. See Figs 6 and 7 for an overview of interaction effects per stigma category and social contact type, respectively. Of the interventions showing a statistically significant interaction effect (n = 8), two (25%), one addressing multiple stigmas and the other a not health-related stigma, explicitly applied positive factors to improve SC, while the other interventions did so implicitly (n = 3, 38%) or did not mention it at all (n = 3, 38%). The SC component within these interventions took between 18 minutes and six hours (n = 6, 75%) or multiple days over a longer period (n = 1, 13%), or the duration was not reported (n = 1, 13%).

• Efforts to equalise relationships between HIV+ and HIV- facilitators [83]
• Role reversals to minimise power relations between doctor and patient [68] [25]

Contact is supported by authorities or law (n = 7)
• The intervention was conducted at the university, indicating that the institution was encouraging the event [79]
• Strongly acknowledged role of PWLE in the project by the organisation [89] [25]

The different groups in contact share a common goal (n = 4)
• The healthcare workers and the service users have a common goal, namely good services [80]
• The participants in this intervention are younger and older colleagues and have the same goals, namely creating business applications [37] [25]

There is intergroup cooperation/no competition (n = 4)
• Constructing a new context in which health service workers and service users plan (stigma reduction) activities together [90]
• Participants are encouraged to engage in respectful, positive intergroup contact [79] [25]

The session is interactive/there is discussion (n = 9)
• Using the principles of participation for collective learning [89]
• The promotion of group discussions [93] [40]

Contact strategy uses 'pretend play' to make it less formal (n = 1)
• This intervention turns the PWLE into 'books' and the target audience into 'readers', due to which both pretend to be something else [79]

Frequent/multiple contact moments (n = 3)
• This intervention uses multiple forms of social contact, namely testimonies and participatory videos [100]
• This intervention applies various forms of indirect contact, e.g. PWLE celebrities, a testimony of a person in recovery, and personal testimony of a colleague [57] [102]

Contact Atmosphere

Contact is supported by high levels of intimacy (n = 3)
• This intervention learned about the importance of meaningful contact between HIV+ and HIV- facilitators [89]
• This intervention ensured the groups were small for intimate, honest contact [86]

The contact takes place in a friendly/informal setting (n = 5)
• Where needed, this intervention was conducted in the home of the target audience, to make use of the comfort of the home [63]
• This intervention creates a story in which the participants are interacting positively and become friends [93]

Contact Content

The contact is led and informed by the local context (n = 5)
• This intervention has investigated 'what matters most' to the target audience (healthcare workers) and informed the strategy accordingly [80]
• This intervention has conducted an exploratory study to understand the context and make choices for strategies [100]
• This intervention integrated feedback on the quality of radio dialogues to modify the content [32]

PWLE are presented as peers/humans instead of patients (n = 2)
• This intervention elevated the visibility and status of service users, to be seen by healthcare workers as skilled members of society [80]
• This intervention emphasised the position of PWLE in their own right [67]

The message concerns PWLE in recovery (n = 5)
• This intervention included a community member who recovered from a mental health condition, and had vignettes describing similar themes of recovery [69]
• This intervention identified, through What Matters Most, that recovery is an important theme, which they included in their myth-busting [80] [102,104]

Perspectives: PWLE profile

PWLE involved (only moderately) disconfirms the stereotype (n = 7)
• This intervention included video clips of a PWLE who disconfirmed the stereotype of a person with albinism by having success [62]
• These interventions included realistic views of PWLE by including struggles [67,75] [104,105]

PWLE are similar to the audience, e.g. age (n = 5)
• This radio intervention connected men to a male PWLE and women to a female PWLE [32]
• These interventions ensured that the PWLE resource persons were of the same age and socio-economic status as the target audience [67,75]

Perspectives: PWLE preparation
Quality assessment according to JBI. Of the 44 studies and their accompanying main publications, three (7%) were of low, thirty-nine (89%) of moderate and two (5%) of high quality (see Table 1 and S3 Text). Although studies scored well on multiple aspects (see S3 Text), several points deserve additional attention. Within studies using a quasi-experimental design (n = 25, 57%), appropriate statistical analysis (n = 18, 72%) and completion of follow-up (n = 12, 48%) were reported to a limited extent. In studies applying an RCT design (n = 16, 36%), it was often unclear how the different stages of blinding (n = 15, 94%), reliable outcome measurement (n = 15, 94%) and concealment of allocation (n = 8, 50%) were performed. None (0%) of the studies with qualitative methods (n = 6, 14%) reported on the position or influence of the researcher.
Explorative qualitative research
To explore expert perspectives and to support and enrich the findings of the systematic review, we conducted six semi-structured individual interviews with stigma reduction researchers and/or practitioners with experience with SC strategies. Interviews lasted 42-54 minutes each. Of the respondents, four (67%) were female. Two respondents originally came from an LMIC. To avoid traceability and safeguard the anonymity of respondents, no further demographic details are provided.
Considerations for SC type. Some respondents mentioned that the context influenced the choice of a specific SC type, to fit content and contact type to the target group. They stated that they consciously chose to adapt the contact strategy and content into an engaging form for the target group. One interviewee argued that they consciously chose to apply imagined contact due to the conflictual nature of the setting. Others mentioned practical considerations, such as a lack of presence of PWLE and therefore the limited possibility to create direct contact, or limited available resources such as screens for showing video testimonials. Other context-related practicalities included costs, required permissions and the available time of the implementing organisation.
Contact in general: Content, process, atmosphere and sustainability. All respondents emphasised the importance of carefully considering the context in which contact takes place, and the need to adapt SC to the context in content and process. They stressed that each situation and culture is different, with stigma experienced and expressed differently. The majority stated that undertaking explorative studies to investigate the context, in order to inform the development of the contact strategy, was of major importance.
While a minority explicitly stated that they had incorporated Allport's classic conditions for positive contact, almost all respondents referred to these factors to a certain extent. Institutional support and having a clear goal were predominantly mentioned. One interviewee indicated that they had built their work entirely on Allport's theory. Some respondents mentioned the importance of creating realistic contact scenarios and positive contact, and of disconfirming stereotypes. Some respondents stated the importance of recognition with PWLE. One interviewee explicitly expressed a preference for creating contact with peers instead of famous PWLE.
The majority argued that a good atmosphere contributes to the quality of contact. A few reflected on the balance between informal (i.e. unstructured) and formal (i.e. structured) contact, and observed that informal contact moments, such as having lunch (walks) and having fun together, contributed to the quality of interaction. Another suggestion was to create small groups to ensure a higher quality of contact.
When developing the contact strategy, the majority stressed the need to consider sustainability. Some noted that their studies focused predominantly on research, with sustainability not as a guiding issue. Nevertheless, one interviewee stressed that it would be useless to create an intervention which has no potential outside of research. The majority underlined the need to consider scaling the approach.
Contact from the PWLE perspective: Preparation, participation, harm mitigation and monitoring. All respondents emphasised the importance of considering the perspective of PWLE in the SC strategy. Some explicitly addressed these perspectives in their explorative studies to investigate their opinions, needs and views, while others had not incorporated PWLE perspectives but stated their importance.
There were some suggestions to empower PWLE before taking part in the contact strategy. Some respondents stressed that PWLE should feel comfortable to disclose and talk about the stigma. One interviewee argued that they learned during the intervention try-out that family involvement contributed to the support of PWLE, and advised including this. A few explicitly mentioned having trained PWLE beforehand, and stressed its importance: Yes, so a lot of preventive measures are there and then it goes on into an individual level, as well as to the family level as well. And then when I say individual level, that I mean, like during the training, the training sessions that I talked about, where they learn how to tell their stories, and how and all of those things, we also ended up, including, we also ended up including a lot of sessions on selfcare, you know, because when they're telling about their stories, most of them are telling stories that are traumatic to them.
One respondent shared a dilemma about the extent to which PWLE may be instructed on what to say to the target audience to create the most impactful contact, as this instruction might limit PWLE in their freedom and autonomy to speak about their own experiences: . . .very often people with [condition] started talking about how difficult they were having it. And then I thought, no, that's, you know that, it's. ... I'm very sorry, but, that's very bad and you should be able to share this, but if you want to change someone's view, and you are going to tell how bad everything is for you, then they don't think "oh, this is actually a human like you and me", actually that doesn't do that much.
A frequently mentioned concern was mitigating harm to PWLE, in line with their heightened vulnerability. One interviewee indicated that they had encountered unexpected disclosure, and advised being prepared for unexpected events when applying SC. The majority expressed a perceived risk of unintended consequences while employing SC, whereas only a minority indicated not having encountered or thought about such unintended consequences. Respondents were concerned about, and struggled with, the notion that contact might increase stigma.
Some stated that it is imperative to evaluate the SC, whereas others did not reflect on this. Some respondents stressed the importance of after-care for PWLE, or at least of checking how PWLE have experienced the contact.
Contact from the perspective of the target audience. A few shared that it was also important to evaluate how target groups without the stigmatised characteristic had experienced the contact intervention, and to check whether they leave the intervention with the intended messages: It's just really important to evaluate as well, and to keep doing so. Because you just see a lot in contact interventions, with contact interventions there with people with [condition], that people with [disease] literally said: "no, but I can see everything just fine", and then people afterwards thought that that person would become blind. I don't know where they got that from, but that is very important to keep evaluating all the times.

Contact from the perspective of the implementing organisation. The challenge of motivating the implementing organisation to engage in contact was frequently mentioned. A common view amongst interviewees was that it was crucial to align with the wishes of the organisation, and to make sure that creating contact was something they sought as well. In their experiences, this increased the motivation of other stakeholders to engage. Almost all respondents expressed the value of creating and cultivating good relationships with implementing organisations. Most suggested embedding the SC strategy within existing structures, such as existing classes at school, for the greatest chance of success.
Discussion
This paper provides an overview of SC stigma reduction interventions, employed across stigmas, populations and settings in LMICs, through a systematic review and expert perspectives. To the best of our knowledge, this is the first systematic review that summarises SC intervention research across stigmas and SC types.
This systematic review demonstrates that SC is a stigma reduction strategy applied across stigmas and settings, with almost half of the interventions addressing mental health stigma. The across-stigma application of SC supports recent calls to look beyond isolated stigmas in the development and implementation of stigma reduction strategies [12,39]. There is no substantial discernible trend between stigma and SC type, although collaborative activities were foremost employed among physical health stigmas. Indirect and direct SC were described most often in studies, while the more distant SC types, imagined or vicarious contact, were applied to a limited extent. Strikingly, none of the studies used online SC, also called E-contact, although interesting examples exist in HICs of bringing people together online to reduce transgender stigma [106] and schizophrenia stigma [107]. Triggered by the worldwide Covid-19 pandemic, online SC could be an avenue to explore further, also in LMICs, as internet accessibility is on the increase [108]. Children were underrepresented, echoing stigma reduction interventions in LMICs in general [15]. Another reason may be that a meta-analysis identified SC as more effective for adults than for children [109], which might have resulted in decisions not to apply SC among children. Another meta-analysis, however, concluded that imagined contact was more effective for children than for adults and proposed imagined contact as a key component of child-focused education-based stigma reduction strategies [110]. Of the two studies in our review which targeted children specifically, Tercan et al. (2021) showed no statistically significant stigma reduction [93] and Nistor et al. (2021) did not conduct statistical analysis [58]. We cannot draw conclusions on the effectiveness of the other studies targeting children alongside (young) adults, as the analysis was not age-stratified.
Most of the interventions were culturally adapted to a certain extent. This is key to ensuring that interventions are relevant in the local context [16] and was identified as a core component of effective stigma reduction interventions [14]. While this contrasts with a recent scoping review in which only 20% of the included studies considered cultural values, meanings or practices [111], our finding confirms another recent review in which half of the interventions were, to a certain extent, culturally adapted [14]. However, for most of the interventions, few or no details were given on how the intervention was made to align with the local context.
Almost all SC interventions included in this review, across the various SC types and stigma categories, reported statistically significant stigma reduction, echoing multiple reviews highlighting SC as a promising and effective strategy [14,18,28]. This finding should, however, be seen in the light that effectiveness was often reported inconsistently or incompletely. Additionally, while interventions using direct and indirect contact were most effective, this was only the case in about one-third of the interventions applying these SC types. Conclusions on effectiveness can therefore only be drawn with a caveat. We found that the majority did not accurately report on time and/or group effects (i.e. time effects were not analysed irrespective of group, and group effects were not analysed irrespective of time). Moreover, interaction effects were often not reported, although the data to calculate these interaction effects were available. Additionally, we cannot rule out that studies that did not show positive results remained unpublished [112]. The overall reported statistically significant stigma reduction might point to a risk of publication bias [14]. Altogether, this implies that the conclusions on effectiveness need to be viewed with caution, in line with a recent study that contested the evidence base of contact-based mental health stigma reduction interventions [23]. The included studies which reported statistically significant interaction effects, consisting of interventions addressing mental health, not health-related and multiple stigmas and using direct, indirect and a combination of SC, were all of moderate quality, limiting the quality of the evidence.
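The distinction drawn here, between a main effect of time, a main effect of group, and their interaction, can be illustrated with a minimal difference-in-differences sketch; the pre/post scores below are hypothetical and are not data from any included study:

```python
def interaction_did(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Difference-in-differences estimate of the group x time interaction:
    (pre-to-post change in the intervention arm) minus (change in the control arm).
    A pure time effect (both arms improving equally) yields 0."""
    mean = lambda xs: sum(xs) / len(xs)
    change_treat = mean(post_treat) - mean(pre_treat)
    change_ctrl = mean(post_ctrl) - mean(pre_ctrl)
    return change_treat - change_ctrl

# Hypothetical stigma scores (lower = less stigma).
pre_t, post_t = [30, 28, 32], [22, 20, 24]   # intervention arm: mean drops by 8
pre_c, post_c = [30, 29, 31], [27, 26, 28]   # control arm: mean drops by 3
print(interaction_did(pre_t, post_t, pre_c, post_c))  # -5.0
```

A study reporting only the 8-point drop in the intervention arm would be reporting a main effect of time; only the 5-point difference-in-differences isolates the intervention-specific reduction that the review argues should be reported.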
Our review has demonstrated that only a few studies considered the perspective of PWLE in the SC intervention, and/or measured the effects of the SC interventions on PWLE, as explored recently [113]. This finding, confirming an earlier review on prejudice reduction [114], is striking, as PWLE are key resource-persons in SC. It significantly contrasts with the idea of "nothing about us without us" [115]. The recent Lancet Commission on ending stigma of mental health conditions also emphasised that PWLE "need to be strongly supported to lead or co-lead interventions that use SC" [10]. Two important remarks can be made on the basis of our study. First, all experts within our qualitative research emphasised that PWLE should be meaningfully involved in developing and implementing the SC, and stressed the importance of preparing, monitoring and evaluating SC with PWLE. Second, several of the included studies recommended how PWLE can be better prepared, e.g. by involving family and friends, recognised the demands social contact can place on PWLE, or underlined the importance of monitoring and evaluating with PWLE. This is supported by studies indicating that participating in SC can strengthen social coping skills to deal with stigma, improve self-esteem and enhance the personal empowerment of PWLE [36,113], without losing sight of potential negative consequences [36].
We paid specific attention to the application of positive factors when applying SC. Strikingly, more than half of the included studies did not describe whether and how they embedded positive factors to improve their social contact intervention and the quality of the social contact, and of those that did, most did so implicitly. No conclusions on the impact of positive factors could be drawn. Several criticisms concerning positive factors have been raised, for example that, in the real world, ideal conditions for SC do not exist [116]. In our review, for example, some included studies argued that 'equal status' could not be created [79,80]. Importantly, it is the quality of contact that matters, rather than simply ticking the box of contact. One should be aware that facilitating SC does not necessarily result in positive interaction, and might even increase stigmatisation [24,36,42]. Participating in SC as PWLE might result in a more vulnerable position, such as potential negative effects of self-disclosure [36]. This calls for careful reflection and development, together with PWLE, before bringing SC into practice.
Our study provides an overview of all factors applied and lessons learnt, distilled from the included studies and interviews (see Table 2). These considerations are neither exhaustive nor directive for SC strategies, as contexts and realities differ. Rather, they should be seen as inspiration and guidance for critical reflection when considering SC.
This study includes the following strengths. First, it synthesises knowledge on SC used as a stigma reduction strategy across stigmas, building on recommendations to identify cross-cutting features of stigma [12,39]. Second, extensive search methods were applied, contributing to thorough inclusion of the literature. Third, we complemented the systematic review with expert perspectives. To our knowledge, this is a new methodological contribution within systematic reviews, and offers additional and in-depth insights. Lastly, during the data extraction and synthesis of the systematic review and the data analysis of the qualitative study, all data were analysed by two researchers to minimise subjectivity, which contributes to the reliability of this study.
Several limitations are recognised. First, we excluded studies targeting two-way prejudice, as this did not meet our stigma definition, which is based in power. Second, we have only been able to interpret what has been reported; we might therefore have missed information when studies did apply positive factors but did not report on them. The studies varied greatly in what they reported on SC strategy details. The explorative interviews mitigated this potential gap in knowledge. Moreover, we analysed the publications on what they reported both explicitly and implicitly. Although these interpretations were checked by two researchers, they might be prone to interpretation errors. We therefore recommend that future researchers report in more detail on their intervention content and process. Third, we did not assess the validation process of the measures and did not explore the secondary benefits of effective stigma reduction thanks to SC, such as health impacts, as this was beyond the focus of this systematic review. Fourth, six first/corresponding authors were interviewed, of whom only two came from an LMIC. Nonetheless, all worked from a specific LMIC context and worked on different types of SC across stigmas. As a final limitation, we did not interview PWLE.
Conclusions
This study has provided an overview of SC stigma reduction interventions across stigmas, populations and LMICs. Most of the interventions focused on mental or physical health-related stigmas and adult populations, and applied foremost indirect and direct social contact. This review identified the challenge that effectiveness was often invalidly reported, which overshadows the finding that most interventions reported statistically significant stigma reduction. Therefore, while direct and indirect contact interventions showed the best results, no definitive statements can be made that any SC type is more effective than others. Similarly, no conclusions can be drawn that SC works better for a specific stigma category. To better understand the potential of SC as a stigma reduction strategy, we recommend 1) improving effectiveness reporting, including interaction effects and effect sizes, and 2) including the under-reported effects of SC on PWLE. To understand the effects on children, we further recommend stratifying according to age. This review provides an overview of all the positive factors applied and lessons learnt to strengthen SC, which can be used as a set of considerations (adapted to each specific context) when developing and/or applying future SC to reduce stigma. We strongly recommend that future researchers report in more detail on the development, processes, content, positive factors and evaluation of SC strategies. Future SC research should pay attention to the controversies in the field. From an ethical perspective, the participation of PWLE, as a key population in SC strategies, should be central to future research and SC strategies.
Fig 1 presents a PRISMA flow diagram of the screening and selection process.
Fig 5 presents an overview, per stigma category (left) and contact type (right), of the percentage of eligible studies reporting significance for a main effect of time and group, and interaction effects. The eight interventions which showed a statistically significant interaction effect were conducted in Turkey (n = 4, 50%), China (n = 3, 38%) and Ghana (n = 1, 13%). Six (75%) combined SC with another stigma reduction strategy (education). Four of the interventions applied direct contact, demonstrating effectiveness in 25% of the interventions using direct contact (n = 12). Indirect contact was used in three interventions, showing effectiveness in 30% of the interventions applying indirect contact (n = 10). One of the five interventions combining SC types demonstrated effectiveness. See Figs 6 and 7.
Table 1. (Continued)
RCT = Randomised Controlled Trial; QE = quasi-experimental study; NC = non-comparison (single-arm) intervention study. 2 Approximate duration indication, estimated by researchers. 3 Cultural adaptation: (1) the intervention at least partially originated from the local context; (2) the intervention was pre-tested/piloted/field-tested, or (2a) the intervention was piloted but it was unclear how this was done; (3) local beliefs, perceptions and/or myths were taken into account; (4) local customs, cultural norms, resources and/or habits were used to embed the intervention; and (5) translation of the intervention only. Built on Clay et al., 2020.
Table 2. Positive factors applied and learned in included SC interventions (n = 44) as considerations. (Positive SC factors applied in included studies / Examples from interventions / Corresponding frameworks)

Contact Process
PWLE and the target audience have equal status (n = 7)
An 8 ½ year old girl presented with pain abdomen with hypertriglyceremia
Presentation of Case
Dr. Zannatul Ferdous Sonia: An 8 ½ year old girl, the first issue of non-consanguineous parents, from Norshingdi, immunized as per EPI, presented at the outpatient department with a history of abdominal pain for 5 days. The pain was located in the epigastric region and was dull in nature. There was no aggravating or relieving factor and no radiation, and it persisted throughout the day. The pain had no relation to food. She also had a history of vomiting several times over the same duration; the vomiting usually occurred after feeds, contained food particles, was not mixed with blood or bile, and was not projectile. She had no history of fatigue, weight loss, polyuria, polyphagia, polydipsia, respiratory distress or constipation. She had a history of similar attacks repeatedly over the last 7 months. The pain was so severe that she required hospitalization several times. Her younger brother is healthy and there is no family history of such illness. On examination, she was afebrile, anicteric and mildly pale, and her vital signs were within normal limits. Anthropometrically she was normal and the BCG mark was present. Abdominal examination showed epigastric tenderness. There was no organomegaly and ascites was absent. Other systemic examinations showed normal findings.
For her complaints she was admitted to a private hospital and underwent investigations including serum electrolytes, amylase, lipase, alkaline phosphatase, alanine aminotransferase, calcium, random blood sugar, fasting triglyceride and ultrasonography of the whole abdomen (Table I). Her reports showed severe hypertriglyceridemia with raised serum amylase, lipase and blood sugar. The ultrasonography report showed a single calcification in the right lobe of the liver measuring 6.6 mm. She was treated with intravenous fluid, an injectable proton pump inhibitor and ceftriaxone, but her condition did not improve and she was then referred to this tertiary level hospital for better management.
After admission to Bangabandhu Sheikh Mujib Medical University, blood was sent for complete blood count, fasting lipid profile and serum T4 and TSH, and plain X-ray of the abdomen in erect posture, chest X-ray and Mantoux test were performed (Table I).
Differential Diagnosis
Dr. Afsana Yasmin: As peptic ulcer disease also presents with similar clinical features, I thought it could be a case of peptic ulcer disease.
Peptic ulcer disease
Peptic ulcer disease can occur in children, but it is uncommon in the pediatric age group. In children, it may be due to primary or secondary causes. Among the primary causes, Helicobacter pylori infection is the commonest. Other primary causes are Zollinger-Ellison syndrome, G-cell hyperplasia, systemic mastocytosis, short bowel syndrome, hyperparathyroidism, etc. Secondary ulcers are more common than primary ulcers and have a bad prognosis. Secondary ulcers may develop due to stress, sepsis, trauma, burns, type 1 diabetes mellitus, drugs, etc. 1 Among the drugs, NSAIDs, steroids, immunosuppressive drugs, etc. can cause ulcers. 2 Stress ulcers are more common in children younger than 4 years of age, and primary ulcers are more common above 4 years of age; primary ulcers may recur even after treatment. The common symptoms are gas, bloating, nausea, vomiting, epigastric pain and abdominal discomfort, which usually increase on an empty stomach or after a meal. 2 Gastrointestinal bleeding may occur with long-standing epigastric pain, but painless bleeding may be the only manifestation of an ulcer. Primary peptic ulcers in children usually present between the ages of 8 and 17 years, commonly around 12 years; usually the cause is H. pylori infection.

Cite this article: Nahar L, Karim ASMB, Rukunuzzaman M, Yasmin A. An 8 ½ year old girl presented with pain abdomen with hypertriglyceremia. Bangabandhu Sheikh Mujib Med Univ J.
Copyright: The copyright of this article is retained by the author(s) [Attribution CC-BY]. Available at: www.banglajol.info

In this patient, the epigastric pain with no radiation, along with vomiting of undigested food and epigastric tenderness, goes in favor of peptic ulcer disease, but the lack of any relation to food and the absence of relief with antiulcerants go against peptic ulcer disease.
To exclude peptic ulcer disease, we have done anti-H. pylori antibody (IgG) and upper GI endoscopy. Her anti-H. pylori antibody (IgG) was negative and the upper gastrointestinal endoscopy findings were normal. So, it was not a case of peptic ulcer disease.

Diabetic ketoacidosis

Acute pancreatitis

Acute pancreatitis is diagnosed by the presence of at least two of the following three criteria 14-17 :

1. Characteristic abdominal pain of acute onset: a severe dull ache, especially in the epigastric region, which may radiate to the back, is aggravated by eating or drinking (usually after taking fatty food) and is relieved by leaning forward or by the knee-chest position. 18
2. Serum amylase and/or lipase level at least three times the upper normal limit.
3. Positive imaging findings.
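The two-of-three rule above can be expressed directly. The sketch below is illustrative; the boolean inputs for this patient are assumptions based on the findings reported in the case (characteristic pain and enzyme elevation present, imaging not diagnostic):

```python
def meets_acute_pancreatitis_criteria(characteristic_pain: bool,
                                      enzymes_over_3x_uln: bool,
                                      imaging_positive: bool) -> bool:
    """Diagnosis requires at least two of the three criteria to be present."""
    return sum([characteristic_pain, enzymes_over_3x_uln, imaging_positive]) >= 2

# This patient: characteristic epigastric pain plus amylase/lipase > 3x the upper limit.
print(meets_acute_pancreatitis_criteria(True, True, False))  # True
```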
There are several etiologies of acute pancreatitis. Among the metabolic causes, hypertriglyceridemia is an established cause of pancreatitis. The complications of HTG include pancreatitis, cardiovascular disease, non-alcoholic fatty liver disease, etc. 22 HTG is a well-established risk factor for pancreatitis, and the risk is about 1-4%. A primary lipid disorder is usually due to a genetic defect of triglyceride synthesis and metabolism. Type I hyperlipoproteinemia indicates the presence of chylomicrons and normal total cholesterol with a very high level of triglyceride, whereas type V hyperlipoproteinemia is defined as the presence of chylomicrons and very-low-density lipoproteins with an increased level of cholesterol and a very high level of triglyceride.
Hypertriglyceridemia
Type 1 hyperlipoproteinemia is mainly due to genetic deficiency of lipoprotein lipase (LPL) or other related proteins as apolipoprotein C2, A5, lipase mutation factor 1 (LMF1) and glycosyl-phosphatidylinositol-anchored high-density lipoprotein binding protein 1 (GPIHBP1.It is diagnosed by genomic DNA analysis for APOA5, APOC2, LMF1 and GPIHBP1 and immunoblotting method to detect serum LPL autoantibody.
This patient presented with recurrent attacks of acute abdominal pain suggestive of pancreatitis, with no history suggestive of peptic ulcer disease or diabetic ketoacidosis. Biochemically, she had raised serum amylase and lipase and severe hypertriglyceridemia. As there was no evidence of secondary causes of hypertriglyceridemia, this is probably a case of primary hypertriglyceridemia.
In this patient, the triglyceride level was very high with normal total cholesterol; LDL could not be calculated by Friedewald's formula, and HDL was normal. She had acute pancreatitis. So, this is probably type 1 hyperlipoproteinemia, but we could not confirm it for lack of facilities for genetic and antibody analysis.
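Friedewald's formula mentioned above estimates LDL as total cholesterol minus HDL minus one fifth of the triglycerides (all in mg/dL), and it is conventionally treated as invalid once triglycerides reach about 400 mg/dL, which is why LDL could not be derived for this patient. A minimal sketch of that calculation:

```python
def friedewald_ldl(total_chol, hdl, triglycerides):
    """Estimate LDL cholesterol (mg/dL) by Friedewald's formula:
    LDL = TC - HDL - TG/5. Returns None when TG >= 400 mg/dL,
    the conventional validity limit of the formula."""
    if triglycerides >= 400:
        return None
    return total_chol - hdl - triglycerides / 5.0
```

With this patient's severe hypertriglyceridemia, the function returns None rather than a (meaningless) LDL estimate.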
Dr. Nahar's Diagnosis
Acute pancreatitis due to primary hypertriglyceremia
Discussion
Dr. A. S. M. Bazlul Karim: The term "recurrent acute pancreatitis" was first introduced into the medical literature in 1948 by Henry Doubilet. 25 According to the INSPIRE criteria, it is defined as at least two separate episodes of acute pancreatitis in the absence of irreversible structural changes in the pancreas.
The patient must meet the criteria of acute pancreatitis after the first attack. 26 The most common causes of recurrent acute pancreatitis are bile duct stone or sludge, sphincter of Oddi dysfunction, anatomical abnormality of the pancreatic tree or ductal stone, genetic mutation, hyperlipidemia, hypercalcemia, ascariasis, autoimmune pancreatitis, and drugs such as azathioprine, mercaptopurine, sulphonamide, etc. Even organophosphorus compounds can cause acute pancreatitis. This girl had similar attacks before admission, but there is no biochemical proof of pancreatitis at that time. From the history it is assumed that she could have had previous attacks of pancreatitis; strictly, however, the case cannot be categorized as acute recurrent pancreatitis.
Treatment of acute recurrent pancreatitis is the same as for acute pancreatitis. Initial treatment includes resting the pancreas by limiting oral intake, aggressive intravenous hydration, and pain management, along with treatment of the underlying cause and of complications. 20
Dr. Md. Rukunuzzaman: Type 1 hyperlipoproteinemia is also called familial hyperchylomicronemia. It may be due to environmental influence or may be genetic; the disorder tends to run in families. We performed a fasting lipid profile of the patient's mother, and it was near the normal limit.
The presentations of type 1 hyperlipoproteinemia are recurrent acute pancreatitis, eruptive xanthomas, lipemia retinalis, hepatosplenomegaly, abnormal liver enzymes, etc. 30 When a patient presents with severe hypertriglyceridemia, the serum turns creamy in appearance, which is called lipemic serum. 31 Lipemia retinalis is a transient change of the retina, a creamy appearance of the retinal blood vessels due to lipid infiltration, which may decrease visual acuity. Lipid-lowering therapy may return the fundus and visual acuity to normal.
This patient attended to us on the 3rd day. She had lipemic serum (Figure 1) and her eye findings were normal.
Dr. Md. Wahiduzzaman Mazumder: Hypertriglyceridemia leads to triglyceride-rich chylomicron sludge in the capillary bed, which causes ischemia of the pancreas and release of pancreatic lipase from damaged pancreatic acini. Further production of FFA leads to free-radical damage and inflammation, which cause premature activation of trypsinogen to trypsin and activation of other enzymes, resulting in pancreatitis. Noncompliance with a low-fat diet may cause recurrent pancreatitis in type 1 hypertriglyceridemia. Recurrent pancreatitis can cause chronic exocrine and endocrine pancreatic insufficiency. 19 There may be a relationship with some genetic predisposition to hypertriglyceridemia-induced pancreatitis, such as mutations in cationic trypsinogen (PRSS1), serine protease inhibitor Kazal type 1 (SPINK1), cystic fibrosis transmembrane conductance regulator (CFTR), and tumor necrosis factor superfamily member 2 (TNF2). 20
Dr. Karim: Acute management of severe hypertriglyceridemia is done by fasting, insulin or heparin infusion, or plasmapheresis. 32 A patient with severe hypertriglyceridemia should be kept nil by mouth on intravenous fluid. Feeding increases further chylomicron production and exacerbates hypertriglyceridemia; initial fasting therefore prevents chylomicron production and is helpful for gradual clearance of chylomicrons and significant reduction of hypertriglyceridemia within one or two days. When the triglyceride level is at 1,000 mg/dL and there is no abdominal pain, a fat-free oral diet can be started. 32
Insulin infusion reduces the triglyceride level rapidly; it can be given by continuous or subcutaneous infusion. Intravenous insulin infusion along with fasting will reduce the triglyceride level by up to 80% in the first 24 hours. Continuous insulin can be given at 0.1-0.3 U/kg/hour in a dextrose infusion to maintain the blood glucose level between 140 and 180 mg/dL. In diabetic patients, continuous insulin infusion at 0.5 to 1 IU/kg/hour can be used with other supportive management. 32 In a case report of a nondiabetic adolescent patient with severe hypertriglyceridemia, a subcutaneous bolus dose of regular insulin at 0.1 U/kg decreased the triglyceride level rapidly. 33 Heparin also releases lipoprotein lipase from muscle, adipose tissue and endothelium into the circulation and hydrolyzes triglycerides. This effect is short-lived and is quickly followed by increased hepatic LPL degradation; therefore heparin is not routinely recommended to treat hypertriglyceridemia in children. Long-term management of hypertriglyceridemia needs lifestyle changes, including dietary restriction of fat to <10-15%, along with weight reduction, increased physical activity and pharmacotherapy.
Fibrates (gemfibrozil and fenofibrate) are the first-line drugs to treat hypertriglyceridemia. Gemfibrozil 600 mg twice daily can be used. An adverse effect may be cholesterol gallstones. Fibrates are contraindicated in renal impairment and gallbladder disease.
Dr. Nahar: We advised restriction of dietary fat intake and increased physical activity, along with pharmacotherapy with a fibrate, niacin, omega-3 fatty acid, cholestyramine and an antioxidant.
Dr. Mohua Mondol: What is the explanation of hepatic calcification in this case?
Dr. Nahar: The most common cause is hepatic tuberculosis. Other etiologies are sarcoidosis, hemangioma, echinococcal cyst, hepatic adenoma, etc. She had no history of fever, weight loss, or contact with a TB patient; her chest X-ray was normal and the Mantoux test was negative. Hepatic calcification in this case may be due to a dilated intrahepatic bile ductule.
Dr. Kamrun Nahar: Why is there gallbladder sludge, and could it be the cause of the pancreatitis?
Dr. Nahar: This patient was initially kept nil by mouth, so reduced contraction of the gallbladder may be the cause of the gallbladder sludge. She also received injectable ceftriaxone for two days in a private clinic, which may also be responsible for it. She had severe hypertriglyceridemia as well. So, I think the sludge is not the underlying cause.
Dr. Habibur Rahman: Why did the patient initially have hyperglycemia, and what measures were taken?
Dr. Nahar: Acute pancreatitis is a stressful condition. The human body regulates the stress response through the hypothalamic-pituitary-adrenal axis and releases more adrenaline, catecholamines and cortisol to reduce mortality. At the same time, the body develops glucose intolerance and insulin resistance. So, this patient had hyperglycemia, and no measures were taken to reduce the glucose level. On subsequent follow-up, it was within the normal limit.
Follow-up
Acute recurrent pancreatitis needs long-term follow-up. Our plan is to follow up the patient 2, 4 and 8 weeks after discharge with fasting lipid profiles (Figure 2).
Final Diagnosis
Acute pancreatitis due to primary hypertriglyceridemia
BSMMU J 2018; 11: 161-167
Figure 2: Triglyceride level (upper) and random blood sugar (middle) during the hospital period, and triglyceride during follow-up after discharge (lower)
Cushing ulcers are associated with brain tumor or brain injury; they are typically single and deep, and prone to perforation. 3-4 Stress and spicy food do not cause ulcers but exacerbate the symptoms.
Diagnostic examinations include the urease breath test, stool tests for H. pylori detection, anti-HP antibody (IgG) and upper gastrointestinal tract endoscopy. 5
Table I Laboratory investigations of patient on admission
Acute pancreatitis: Acute pancreatitis is a reversible process characterized by histological evidence of inflammation of the pancreatic parenchyma, interstitial edema, and infiltration of inflammatory cells with variable degrees of cellular apoptosis, necrosis and hemorrhage. 13
Plasmapheresis is another option to treat severe hypertriglyceridemia. 32,33 In children, there are only few
Histopathological distinction of non-invasive and invasive bladder cancers using machine learning approaches
Background: One of the most challenging tasks in bladder cancer diagnosis is to histologically differentiate two early stages, non-invasive Ta and superficially invasive T1, the latter of which is associated with a significantly higher risk of disease progression. Indeed, in a considerable number of cases Ta and T1 tumors look very similar under the microscope, making the distinction very difficult even for experienced pathologists. Thus, there is an urgent need for a machine learning (ML)-based system to distinguish between the two stages of bladder cancer.
Methods: A total of 1177 images of bladder tumor tissues stained with hematoxylin and eosin were collected by pathologists at the University of Rochester Medical Center, including 460 non-invasive (stage Ta) and 717 invasive (stage T1) tumors. Automatic pipelines were developed with the image-processing software ImageJ and CellProfiler to extract features for three invasive patterns characteristic of T1-stage bladder cancer (desmoplastic reaction, retraction artifact, and abundant pinker cytoplasm). The features extracted from the images were analyzed by a suite of machine learning approaches.
Results: We extracted nearly 700 features from the Ta and T1 tumor images. Unsupervised clustering analysis failed to distinguish hematoxylin-and-eosin images of Ta vs. T1 tumors. With a reduced set of features, we successfully distinguished the 1177 Ta or T1 images with an accuracy of 91-96% using six supervised learning methods. By contrast, convolutional neural network (CNN) models that automatically extract features from images produced an accuracy of 84%, indicating that feature extraction driven by domain knowledge outperforms CNN-based automatic feature extraction. Further analysis revealed that desmoplastic reaction was more important than the other two patterns, and that the number and size of tumor cell nuclei were the most predictive features.
Conclusions: We provide an ML-empowered, feature-centered, and interpretable diagnostic system to facilitate the accurate staging of Ta and T1 disease, which has the potential to apply to other types of cancer.
Background
Bladder cancer is one of the most common malignancies in the world, with nearly 550,000 newly diagnosed cases and 200,000 deaths from this disease estimated in 2018 [1]. Approximately 90% of bladder cancers are urothelial carcinomas that arise from the epithelial cells lining the inside of the bladder. Roughly three-fourths of urothelial carcinomas are non-muscle invasive [2]. According to the current WHO classification system, non-muscle invasive bladder cancers (NMIBCs) can be divided into three groups: Ta (non-invasive papillary), Tis (carcinoma in situ), and T1 (invasion into the subepithelial connective tissue/lamina propria), which account for approximately 70, 10, and 20% of NMIBCs, respectively [2,3]. Ta and Tis tumors are confined to the urothelium and have not penetrated the basal membrane. In particular, Ta tumors often present as low-grade lesions that can be managed conservatively [2][3][4]. By contrast, T1 tumors are mostly high-grade and have the potential to progress to muscle invasion and extravesical dissemination [3,5]. In general, NMIBCs have a favorable treatment outcome, with a five-year survival rate of up to 90%, whereas muscle-invasive bladder cancers have a less favorable prognosis, with a 30-70% five-year survival rate [6]. Clearly, accurate diagnosis of non-invasive (Ta) versus invasive (T1) bladder cancer is vitally important and will help clinicians make a timely and appropriate treatment plan for patients.
To date, the detection of bladder cancers mainly depends on the cystoscopic examination of the bladder and biopsy/resection of the tumor as well as urine cytology [7]. Currently, no molecular biomarkers accurately stage Ta and T1 tumors. Histological assessment remains a vital tool to differentiate the T1 disease from the Ta disease.
Although several histological features suggestive of tumor invasion have been identified (see below), Ta and T1 tumors sometimes look very similar under microscope, making the distinction very difficult even for experienced pathologists. As an illustration, 235 bladder tumors initially diagnosed as T1 tumors were restaged as being Ta (35%), T1 (56%), "at least" T1 (6%), and ≥ T2 (3%) diseases by an experienced reviewer [8]. Obviously, there is considerable room for improvement of inter-observer agreement by developing objective methods.
Computerized image processing technology has been shown to improve efficiency, accuracy and consistency in histopathological slide evaluation and provides a novel diagnostic tool to the practice of pathology [9]. Automated analysis systems have been developed to quantitatively capture morphological features of histopathological images to predict the outcome of breast cancer [10], neuroblastoma [11], lymphoma [12], lung cancer [13], and Barrett's esophagus [14].
Image-based predictive models are further empowered by recent advances in machine learning (ML) and computer vision, achieving expert-level accuracy in medical image classification [15][16][17][18][19][20]. Recent work has shown that a convolutional neural network (CNN)-based deep learning model can achieve 100% accuracy in identifying the presence or absence of breast cancer cells in a whole slide [20]. A similar study from Google Inc. found that a CNN model was able to identify breast cancer better than pathologists [21]. Because training a CNN model from scratch requires a large number of medical images that are often hard to obtain, a highly effective approach to deep learning on small image datasets is to use a pretrained network such as those of the Visual Geometry Group (VGG) [22], which have previously been trained on large image-classification datasets. However, none of these models were built for classifying bladder cancer images. This lack of computational models severely hampers the application of modern image-based analytic tools to differentiating Ta and T1 diseases.
In this study, we aim to develop a novel MLempowered, feature-centered, and interpretable diagnostic system to facilitate the accurate staging of Ta and T1 bladder tumors. We design a fully automated informatics pipeline to extract quantitative image features from hematoxylin and eosin (H&E)-stained slides (see flowchart in Supplementary Figure S1) and identify microscopic patterns that are important for distinguishing T1 from Ta tumors. Our methods may be not only helpful for the precision medicine of bladder cancer but also extensible to other types of cancer.
Histopathological slides
Upon approval from the Institutional Review Board at the University of Rochester Medical Center (URMC), we collected a total of 1177 images from H&E-stained bladder cancer tissues, which included 460 non-invasive (stage Ta) and 717 invasive (stage T1) urothelial tumors. Problematic cases, in which it was difficult for a group of genitourinary pathologists at URMC to histopathologically distinguish between Ta and T1, as well as muscle-invasive cases (stage T2 and above), were excluded from the analysis. All remaining images were included for image processing and analysis, with the image labels serving as ground truth. All tumor specimens were obtained by surgical excision and processed by a standardized protocol at the Department of Pathology and Laboratory Medicine at URMC.
Image digitization system
A Leica DM5000 B upright research microscope fitted with a high-resolution MacroFire camera was used to capture the raw H&E-stained images. The camera captured a field of 2048 × 2048 pixels under 100× magnification. Image files were saved in the ".tiff" format. The central part of each raw image was cropped into 1 to 4 images of 700 × 700 pixels by a script using the "Crop" function in ImageJ.
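The center-crop-and-tile step can also be approximated outside ImageJ. A minimal NumPy sketch (the tile size and grid count are parameters for illustration, not taken from the ImageJ script) of splitting the central region of a large image into fixed-size tiles:

```python
import numpy as np

def crop_center_tiles(img, tile=700, grid=2):
    """Crop the central (grid*tile) x (grid*tile) region of img and
    split it into grid*grid non-overlapping tiles of size tile x tile."""
    h, w = img.shape[:2]
    size = grid * tile
    if size > h or size > w:
        raise ValueError("image smaller than requested tiling")
    top, left = (h - size) // 2, (w - size) // 2
    center = img[top:top + size, left:left + size]
    return [center[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            for r in range(grid) for c in range(grid)]
```

With the defaults, a 2048 × 2048 capture yields four 700 × 700 tiles taken from its center, which also discards the dark corner artifact mentioned below.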
Image processing system
The cropped images were then pre-processed with the "FFT" function in ImageJ when needed, because this function makes the light intensity evenly distributed by normalizing it from the darkest corner to the brightest corner of the image. The pre-processed images were then converted into black-and-white images to mask the irrelevant areas, and the images with lesions of interest were used for feature extraction.
Image feature extraction
The feature extraction was performed using CellProfiler and ImageJ. Both packages enabled us to create and customize pipelines for extracting patterns from the images. ImageJ provides a macro scripting language, which allows the extraction of patterns of interest; in this project, the patterns of retraction artifact, nuclear size, and cytoplasmic color were extracted with ImageJ. CellProfiler provides multiple built-in cellular feature extraction modules; the patterns of connective tissue around the tumor and of nuclear shapes were extracted with CellProfiler. Overall, ImageJ extracted textural features of the whole tissue, whereas CellProfiler extracted features of individual cells. For every image, 60 features were extracted by ImageJ and 636 by CellProfiler. The spreadsheets containing the features from ImageJ and CellProfiler were merged into a single data frame in the R environment.
Statistical analysis and plotting
All statistical analyses in the project were performed using R. To set the bin sizes for the color spectra, all image pixels were processed and evaluated through R scripts. The performance metrics, including accuracy, the receiver operating characteristic (ROC) curve, and the area under the curve (AUC), were calculated with functions from the Scikit-Learn package. ROC and AUC are appropriate metrics because our data have imbalanced classes: 460 non-invasive and 717 invasive samples. The plots of ROC curves, cutoff points and AUC scores were generated with the Matplotlib package. The boxplots comparing the performance of the ML models were generated with SigmaPlot 12.5.
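AUC has a simple probabilistic reading that explains why it suits imbalanced classes: it is the chance that a randomly chosen invasive image is scored above a randomly chosen non-invasive one. A small pure-Python illustration of that definition (a didactic sketch, not the Scikit-Learn implementation used in the study):

```python
def roc_auc(labels, scores):
    """Probability that a random positive is scored above a random
    negative (ties count half); equivalent to the ROC AUC."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Because it compares positives with negatives pairwise, a classifier that scores every image as the majority class gains nothing in AUC, unlike raw accuracy.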
Data processing methods
Features extracted by ImageJ and CellProfiler were processed and saved in comma-separated value (CSV) format. An R script was written to combine all CSV files into one large spreadsheet. The Pandas package was used for data processing and subsetting. The large CSV file was transformed into a NumPy matrix before being fed into the ML models.
General ML models
The general ML classifiers used in the project were taken from Python packages. The probabilistic neural network (PNN) framework was from the Neupy package. The other models, namely support vector machine (SVM), logistic regression (LR), bagging (AdaBoost), random forest (RF), and multilayer perceptron (MLP), were from the Scikit-Learn package. The datasets were randomly partitioned into a training set (70%) and a testing set (30%). To ensure the robustness of the results, the random partitioning process was repeated 20 times and the mean performance over the 20 tests was taken as the overall performance of each classifier. The default threshold of 0.5 was used for classification.
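The repeated 70/30 evaluation protocol described above can be sketched as follows. This is an illustration on synthetic data, with LogisticRegression standing in for any of the classifiers; it is not the authors' code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def repeated_holdout_accuracy(X, y, model_factory, repeats=20, test_size=0.3):
    """Average test accuracy over repeated random train/test splits."""
    accs = []
    for seed in range(repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, random_state=seed, stratify=y)
        model = model_factory()          # fresh classifier per repeat
        model.fit(X_tr, y_tr)
        accs.append(accuracy_score(y_te, model.predict(X_te)))
    return float(np.mean(accs))
```

Passing a factory (rather than a fitted model) guarantees that each of the 20 repeats trains from scratch on its own split.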
Training and validation dataset splitting
Random sampling was performed multiple times for each training and validation run. Before feeding the data into any machine learning or deep learning model, the dataset was first balanced by sampling an equal number of invasive and non-invasive cases (460 each), so that the two classes carried equal weight. Within the balanced dataset, the data were further split into a 70% training set and a 30% validation set by random sampling. No image appeared in both sets, and no duplicates were allowed in the study.
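Class balancing by downsampling the majority class, as described above, can be sketched with a generic helper (an illustration, not the authors' code):

```python
import random

def balance_by_downsampling(items, labels, seed=0):
    """Randomly sample an equal number of items from each class,
    without replacement, and return shuffled (item, label) pairs."""
    rng = random.Random(seed)
    by_class = {}
    for item, lab in zip(items, labels):
        by_class.setdefault(lab, []).append(item)
    n = min(len(group) for group in by_class.values())  # minority size
    balanced = [(item, lab)
                for lab, group in by_class.items()
                for item in rng.sample(group, n)]
    rng.shuffle(balanced)
    return balanced
```

Applied to 717 invasive and 460 non-invasive images, this yields 460 of each, which is then split 70/30 for training and validation.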
Distinctive microscopic patterns for stage ta versus T1 bladder tumors
At least three morphological features have been identified to distinguish between stages Ta (Fig. 1a-c) and T1 ( Fig. 1d-f) bladder tumors. The first pattern is desmoplastic reaction, which is characterized with dense fibrosis around the nests of T1 tumor cells (Fig. 1d). This pattern is most definite for invasion, but T1 lesions often lack it. The second pattern is retraction artifact, which is the result of tissue shrinkage after dehydration during tissue processing, seen around the nests of T1 tumor cells (Fig. 1e). The third pattern is more abundant, pinker cytoplasm in T1 tumor cells, presumably due to higher uptake of eosin, compared with that of Ta tumor cells (Fig. 1f). Although pathologists usually make a diagnosis of tumor invasion based on these patterns under microscope, a quantitative representation will allow automatic extraction and analysis of the patterns in H&E-stained slides.
H&E-stained slide digitalization, image processing and feature extraction
We obtained 1177 H&E-stained histopathology images of Ta or T1 bladder tumors from the archive of the Department of Pathology and Laboratory Medicine at URMC. To digitize these slides, each image was captured at 100× magnification with 2048 × 2048 pixels. Although the overall images were very clear, a dark spot was often found at the lower right-hand corner. We therefore cropped and tiled the central part of the raw images to get smaller ones of 700 × 700 pixels.
To extract objective morphological information from these images, we used ImageJ and CellProfiler to convert image patterns into numerical values. We built nine fully automated image-pattern extraction pipelines to capture the three microscopic patterns described above. Due to the complexity of pathological images, each pattern comprised various features. The general procedure of feature extraction is described below. We first masked unwanted areas using methods such as color thresholding and matrix subtraction before extracting the features. Since all raw images were consistent in staining quality, the parameters for extracting each feature were kept the same across all images. The image features included the nuclear size distribution, crack edge, sample ratio, and distribution of pixel intensity in the connective tissue and cytoplasm, as well as the shape of the connective tissue and of tumor cell nuclei. The numerical representation of the features was output in spreadsheets, arranged in columns.
For example, to extract the retraction artifact pattern, we developed a pipeline to differentiate two types of non-tissue regions in a H&E-stained image: one around cells (i.e., small space around cells) and the other between tissue parts (i.e., large space between tissue edges) (Fig. 2a). To catch only the small space surrounding cells (named "cracks" for simplicity), we first converted an original color image to a monochrome image with black or white color on each pixel, in which all non-tissue regions were white (Fig. 2b). Then we converted the original color image to an 8-bit grayscale image. The regions with more than 40 pixels in diameter were considered to be the regions between tissue parts, which were shown in white (Fig. 2c). This 8-bit image was then converted to a 1-bit image with black and white colors inverted; now the inter-tissue space was shown in black (Fig. 2d). Then we combined the images shown in Fig. 2b and d to get the final image, in which the inter-tissue space was masked and cracks were shown in white (Fig. 2e). The number of pixels in the white regions represented the size of the cracks around cells.
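The crack-extraction logic (keep small white non-tissue regions, mask out the large inter-tissue space) can be approximated in Python. The following sketch uses connected-component pixel count as a stand-in for the 40-pixel diameter rule in the ImageJ pipeline, and plain BFS labeling to stay dependency-free:

```python
from collections import deque

import numpy as np

def crack_mask(gray, bg_thresh=200, max_crack_px=40):
    """Binarize: pixels brighter than bg_thresh are non-tissue (white).
    Keep only white components with at most max_crack_px pixels; larger
    white regions are treated as inter-tissue space and masked out.
    (Component size is a proxy for the diameter criterion in the text.)"""
    white = gray > bg_thresh
    seen = np.zeros_like(white, dtype=bool)
    keep = np.zeros_like(white, dtype=bool)
    h, w = white.shape
    for i in range(h):
        for j in range(w):
            if white[i, j] and not seen[i, j]:
                comp, queue = [], deque([(i, j)])
                seen[i, j] = True
                while queue:                      # BFS over the component
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and white[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(comp) <= max_crack_px:     # small region = crack
                    for y, x in comp:
                        keep[y, x] = True
    return keep
```

Summing the returned mask gives the crack-size feature: the number of white pixels in small non-tissue regions.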
We also developed pipelines (Supplementary Figures S2 and S3) to extract features in the pinker cytoplasm pattern and the desmoplastic reaction pattern. Note that each of the three microscopic patterns was extracted separately, and the numeric representation of each pattern was later combined into a large spreadsheet in the CSV format, in which each row represented an image and each column represented a feature. For every image (out of 1177), 740 quantitative features were extracted to represent the three microscopic patterns.
Unsupervised clustering of cancer images
To understand whether the extracted features were sufficient to differentiate the histopathological images of Ta and T1 tumors, we set out to conduct a cluster analysis of the features. We first reduced the data dimension through principal component analysis (PCA) because, through PCA, we were able to rank the top components by their eigenvalues. However, as shown in Fig. 3a-b, plotting the components with the highest eigenvalues failed to reveal recognizable clusters. In addition, performing k-means analysis on the PCA components revealed no apparent clusters for k between 2 and 9 (Fig. 3c-d). Combining the PCA and k-means analyses, we found that the non-invasive and invasive tumor images were highly overlapped; splitting the clusters resulted in an information gain of less than 0.006. These data suggested that non-invasive and invasive bladder cancer images were not separable with simple linear transformation. Therefore, supervised learning methods were considered.
Fig. 3 Clustering analysis of extracted features from Ta (non-invasive) and T1 (invasive) tumor images. The features were first selected by PCA and then clustered using k-means analysis. Plots were made for the first and second components of the PCA output (a), which were clustered with k = 2 (c). Plots were also made for the first and third components of the PCA output (b), which were clustered with k = 9 (d)
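The PCA-plus-k-means exploration can be reproduced in outline with NumPy alone. This is a didactic sketch, not the analysis code; the k-means here uses a deterministic farthest-point initialization for simplicity:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project centered data onto its top principal components and
    report the explained-variance ratio of those components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T
    var_ratio = (S ** 2) / np.sum(S ** 2)
    return scores, var_ratio[:n_components]

def kmeans(X, k, iters=50):
    """Plain k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    while len(centers) < k:
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[int(d.argmax())])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

On well-separated synthetic blobs this recovers the true groups; on the Ta/T1 features, as reported above, no comparable separation emerged.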
Feature reduction and supervised classification of cancer images
To select meaningful features from the 740 extracted by ImageJ and CellProfiler, we first manually trimmed questionable features related to time (i.e., when images were processed or taken), index (i.e., image labels), or descriptive strings (i.e., initials of processing methods or channels), as well as those containing missing ("N/A") values produced by the ImageJ or CellProfiler processing. Features containing no numeric values were also removed. As a result, 696 features were selected for further analysis.
Given that the training set contained 930 images, 696 features raised a concern of overfitting. To address this concern, we reduced the number of features by employing decision trees (DT) with k-fold cross-validation to rank the relative importance of the features. Specifically, we first used all 696 features as input to build 20 forests, each with 40 trees. As in RF, each tree was constructed from random samples, but the number of features was fixed at 696. The DT method evaluated the importance of each feature by averaging its importance values over all trees of a forest. We then ranked the relative importance of all 696 features by their average importance values. This rank determined the order in which features were added to the ML models: after measuring the impact of the first feature, we iteratively added the next feature in the rank. As shown in Fig. 4a, as the features were added in ranking order, the performance of the 6 ML classifiers, including PNN, increased and reached a plateau between the 70th and 100th features (Fig. 4b). After adding 200 features, the performance started to drop (Fig. 4a). To examine whether the ranking order of features was critical for the observed tendency, we randomized the feature order and found that a plateau was still reached between the 70th and 100th features (Supplementary Figure S4A-B). This result suggests that the DT method successfully selected the most important 100 features from the original 696.
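The tree-based ranking idea (average per-tree feature importances over bootstrap samples, then sort) can be sketched with Scikit-Learn. This is a simplified stand-in for the 20-forest procedure described above:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def rank_features(X, y, n_trees=40, seed=0):
    """Average DecisionTree feature importances over bootstrap samples
    and return feature indices sorted most-important first."""
    rng = np.random.default_rng(seed)
    importance = np.zeros(X.shape[1])
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), size=len(X))   # bootstrap sample
        tree = DecisionTreeClassifier(random_state=0)
        tree.fit(X[idx], y[idx])
        importance += tree.feature_importances_
    importance /= n_trees
    return np.argsort(importance)[::-1]
```

Features are then added to the classifiers in the returned order, which is how the plateau around the top 100 features was identified.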
To compare further the two feature sets (100 vs. 696) in predicting Ta and T1 bladder cancers, we used 6 ML classifiers, including PNN [23][24][25], RF [26,27], SVM [28], bagging (AdaBoost) [29], LR [30], and MLP. Three metrics were used to evaluate the performance of the classifiers: accuracy, the ROC curve, and the AUC. We found that the average accuracy was over 90% for all classifiers (Fig. 5). Moreover, the 100-feature set outperformed the 696-feature set in five out of six classifiers, except LR (Fig. 5). The same trend is observed in ROC and AUC (Fig. 6a-b and Supplementary Figure S5A-B), with the best classifier reaching an accuracy of 96.7% (Fig. 5). Overall, our work clearly showed that the top 100 features generally had a higher predictive power than the 696 features.
To examine the performance of deep learning models on our data, we used both pre-trained VGG16 and VGG19 networks to extract features. Specifically, we took the convolutional base of the networks, ran the Ta and T1 cancer images through it, and trained a new classifier on top of the output. We found that the accuracies of VGG16 and VGG19 reached 84 and 81%, respectively (Supplementary Figure S6B). Our results showed that the general ML classifiers outperformed the deep learning models, suggesting that, for cancer histopathological images, feature extraction based on domain knowledge performs better than computer-based feature extraction.
Relative importance of three microscopic patterns
To assess the relative importance of the three microscopic patterns in predicting non-invasive versus invasive bladder cancer images, we separated the 696 features into three groups and assessed the performance of the 6 ML classifiers. We found that features extracted from the desmoplastic reaction pattern had the highest overall accuracy of 90.5%, with an average AUC of 0.98 (Fig. 8a and Supplementary Figure S7A). By contrast, pinker cytoplasm had 74.5% overall accuracy with an average AUC of 0.825 (Fig. 8b and Supplementary Figure S7B), whereas retraction artifact had 73.4% overall accuracy with an average AUC of 0.802 (Fig. 8c and Supplementary Figure S7C). It is noteworthy that desmoplastic reaction had 675 features, whereas pinker cytoplasm and retraction artifact had 13 and 15 features, respectively. These observations suggest that the models with the desmoplastic reaction features may be overfitting. Even so, reducing from 675 to 70 features in the desmoplastic reaction pattern still outperformed the pinker cytoplasm and retraction artifact patterns, with an accuracy of over 90% (data not shown). To some extent, all three patterns could distinguish Ta and T1 tumor images with reasonable accuracy (> 70%), suggesting that some features extracted from these patterns might be correlated (see Discussion).
To understand which features in the desmoplastic reaction pattern are more important, we ranked all features based on 40 DTs. We found that features, such as the number of nuclei and distributions of nuclei sizes, came out at the very top of our ranking (Supplementary Figure S8). These findings were consistent with the main microscopic characteristics of the desmoplastic reaction pattern, in which a large number of inflammatory cells surround the nests of tumor cells. Our result suggests that the desmoplastic reaction pattern contains the most informative features in distinguishing Ta versus T1 bladder tumors.
Discussion
The goal of this project was to build an ML-empowered, feature-centered, and interpretable diagnostic system to assist pathologists in distinguishing histopathological images of non-invasive and invasive bladder cancers. For a given image, the system provides a probability of the tumor being Ta or T1, which can be used as additional evidence to facilitate the doctors' decision-making process.
To this end, we successfully developed automatic pipelines to extract features in three invasive patterns characteristic of T1-stage bladder cancer (i.e., desmoplastic reaction, retraction artifact, and abundant pinker cytoplasm), using ImageJ and CellProfiler. Meanwhile, the presence of the muscle layer in specimens of bladder tumor resection is often crucial for cancer staging. However, we did not take the muscle layer into account in our analysis because: 1) its presence in a slide provides no help in distinguishing Ta and T1 tumors; and 2) the muscle layer is often absent in biopsy specimens. Our system was therefore designed on the basis of the assumption that the muscle layer is not present in tumor specimens. The fact that the system is able to achieve >90% predictive accuracy suggests that textural features hidden in the aforementioned three patterns are critical for distinguishing T1 from Ta tumors.
We further investigated the relative importance of the three patterns in the distinction of Ta versus T1 tumors, and found that the desmoplastic reaction pattern is the most important. Interestingly, using the 15 and 13 features identified from the retraction artifact and abundant pinker cytoplasm patterns, respectively, still achieved >70% accuracy. A separate analysis with 60 features combining all features extracted from the retraction artifact pattern and the pinker cytoplasm pattern was able to achieve ~85% accuracy (data not shown). In our view, this high predictive accuracy may be explained by two possibilities. First, multiple patterns may co-exist in the T1 tumor images. In other words, most T1 images may have more than one microscopic pattern. Second, different patterns may share common textural features. Identification of these 'basic' features will shed light on the fundamental differences between Ta and T1 tumors. It may help further reduce the feature number, thereby improving the interpretability of this ML-based diagnostic system.
Feature engineering requires domain knowledge/expertise and may take much time to identify features that represent the patterns of interest. Recently, deep learning techniques [31,32] from the computer science field have dramatically improved the ability of computers to recognize objects in images. This raises the possibility of fully automated computer-aided diagnosis in pathology. Among all the ML models in image recognition, CNN is one of the most studied and validated methods. Not only does it deliver great performance, but the design of CNN hidden layers also allows the model to extract meaningful features without any prior knowledge. The pathology community has shown increasing interest in comparing CNNs to human judgements. Although applying deep neural networks to recognizing medical image patterns is not a new idea and has shown promising results, the requirement for a large quantity of training data turns out to be a major bottleneck for many less common diseases. To address this limitation, we developed CNN models using pre-trained VGG networks and found that they achieve a remarkable accuracy of 84%. Of note, these CNN models are pre-trained on general images that are different from histopathological images, suggesting that their performance could be improved with pre-training on histopathological images.
Notably, features in CNN models are automatically extracted from images without prior knowledge, and some features may be completely novel to pathologists. By assessing the intermediate layers of CNN, we may identify novel features that could be subsequently added to the feature engineering models to improve prediction accuracy. This iterative process will help make our system more powerful and interpretable.
Although the inclusion of pathologists (i.e., human-in-the-loop) in the model development process is very important, there is a need to go beyond interpretable machine learning. To reach a level that supports pathologists in their daily decision making, another factor that should be taken into account is causability [33], which is measured in terms of effectiveness, efficiency, and satisfaction related to causal understanding and its transparency for a user. In other words, it refers to a human-understandable model. Since causability encompasses measurements for the quality of explanations, it enables an expert pathologist to consider the causality of a particular disease. As such, although our system is, in some sense, interpretable, achieving causability is the ultimate goal, making the system not only usable but also useful for pathologists.
Conclusions
With ImageJ [34,35] and CellProfiler [36], nearly 700 numeric features were extracted from three well-characterized patterns that distinguish T1 from Ta tumors, including desmoplastic reaction, retraction artifact, and abundant pinker cytoplasm. Clustering analysis with k-means failed to separate Ta and T1 images. To avoid overfitting, we selected only informative features through feature ranking based on decision trees with k-fold cross-validation. With the top 100 features, we successfully distinguished ~1200 Ta and T1 images with an accuracy of 91-96% using six classic ML approaches such as random forest, LR, PNN, bagging (Adaboost), SVM, and MLP. By contrast, a CNN model based on pre-trained VGG networks achieved an accuracy of 84%, suggesting that human-assisted feature extraction could outperform automatic feature extraction. Our analysis suggests that desmoplastic reaction is more important than the other two patterns. Moreover, the number and size distribution of nuclei of tumor cells in the desmoplastic reaction pattern appear to be the most predictive features, which is generally consistent with observations by pathologists. This ML-empowered diagnostic system is highly interpretable and has the potential to be applied to other types of cancer.
Deciphering miRNA transcription factor feed-forward loops to identify drug repurposing candidates for cystic fibrosis
Background: Cystic fibrosis (CF) is a fatal genetic disorder caused by mutations in the CF transmembrane conductance regulator (CFTR) gene that primarily affects the lungs and the digestive system, and current drug treatment is mainly able to alleviate symptoms. To improve disease management for CF, we considered the repurposing of approved drugs and hypothesized that specific microRNA (miRNA) transcription factor (TF) gene networks can be used to generate feed-forward loops (FFLs), thus providing treatment opportunities on the basis of disease-specific FFLs.
Methods: Comprehensive database searches revealed significantly enriched TFs and miRNAs in CF and CFTR gene networks. The target genes were validated using ChIPBase and by employing a consensus approach of diverse algorithms to predict miRNA gene targets. STRING analysis confirmed protein-protein interactions (PPIs) among network partners and motif searches defined composite FFLs. Using information extracted from SM2miR and Pharmaco-miR, an in silico drug repurposing pipeline was established based on the regulation of miRNA/TFs in CF/CFTR networks.
Results: In human airway epithelium, a total of 15 composite FFLs were constructed based on CFTR-specific miRNA/TF gene networks. Importantly, nine of them were confirmed in patient samples and CF epithelial cell lines, and STRING PPI analysis provided evidence that the targets interacted with each other. Functional analysis revealed that ubiquitin-mediated proteolysis and protein processing in the endoplasmic reticulum dominate the composite FFLs, whose major functions are folding, sorting, and degradation. Given that the mutated CFTR gene disrupts the function of the chloride channel, the constructed FFLs address mechanistic aspects of the disease and, among 48 repurposing drug candidates, 26 were confirmed with literature reports and/or existing clinical trials relevant to the treatment of CF patients.
Conclusion: The construction of FFLs identified promising drug repurposing candidates for CF and the developed strategy may be applied to other diseases as well. Electronic supplementary material: The online version of this article (doi:10.1186/s13073-014-0094-2) contains supplementary material, which is available to authorized users.
Background
Cystic fibrosis (CF) is a lethal autosomal recessive disorder that mostly affects Caucasians, with approximately 30,000 cases in the United States and about 70,000 cases reported worldwide. It is caused by mutations in the CF transmembrane conductance regulator (CFTR) gene [1], which codes for an ion channel that regulates the balance between the transport of chloride and the movement of water through an epithelial barrier. Mutations in CFTR result in altered mucus and thickened secretions that promote chronic infection and inflammation [1]. Note that the mutations are grouped into different classes affecting the quantity of the CFTR protein, its function, or a combination of both. Although the molecular causes of CF are well understood and >1,000 mutations have been identified, the treatment of CF is complex and mostly relies on the use of antibiotics. Currently, there is no cure for CF and drug treatment can only ease symptoms by influencing mucus production and the restoration of pulmonary surfactant, preventing inflammation and infection, and through the combined use of nutritional supplements [2]. Despite some advances in the treatment and management of the disease, the median age of survival for CF patients is still only about 40 years [2].
In 2012, the US Food and Drug Administration (FDA) approved Kalydeco (ivacaftor) for its use in CF patients. This drug modulates CFTR activity and fulfilled a promise made more than 20 years ago when a mutated CFTR was first discovered and researchers spoke optimistically about developing drugs to restore the function of the mutated protein [3]. The successful development of Kalydeco is a milestone in the treatment of CF patients; however whether patients will be able to afford the drug is unclear, making its widespread adoption and use questionable. In the UK, regulators only agreed to approve Kalydeco after Vertex Pharmaceuticals reduced its official list price to £182,625 ($297,000) per year per patient [4] and the drug is intended for use in CF patients of the G551D genotype only and must be aged 6 years and above.
Importantly, to address unmet needs in rare and neglected diseases, drug repurposing of approved drugs has been advocated and attracted significant attention from academia, pharmaceutical industry and governmental agencies [5,6] and included the use of statins (e.g., simvastatin) for the treatment of adult CF [7]. Apart from its lipid-lowering effects, statins influence the production of pro-inflammatory cytokines and chemokines. Moreover, statins modulate nitric oxide (NO) production by inhibiting the RhoGTPase pathway, thereby improving NO and inflammatory components in pathogen infected lungs of CF patients [8], as evidenced in clinical studies [9,10].
An identification of drug-repurposing candidates for CF based on a systematic analysis of an entire drug landscape has not been attempted. We therefore explored a computational strategy based on the drug-repurposing principle that integrates diverse data, including data from emerging molecular technologies such as expression of microRNA (miRNA), and transcription factors (TFs) to promote the rational use of market drugs for the treatment of CF.
For this purpose, feed-forward loops (FFL) were constructed; FFLs are defined as regulatory network motifs whose connectivity patterns occur much more frequently than in randomized 'control' networks [11]. A FFL usually consists of two regulatory elements, one of which controls the other, and both regulate gene expression together [11]. FFLs have been demonstrated to play important roles in disease development and have contributed to an understanding of underlying mechanisms [12]. For instance, the two regulatory elements can be defined as two TFs or one TF plus one miRNA. Taylor et al. [13] detected a nuclear factor, erythroid 2-related factor (Nrf2), that regulated a FFL involved in the protective response to oxidative stress in a mouse disease model. Hall et al. [12] reported a type I interferon (IFN) FFL in the pathogenesis of autoimmune rheumatic diseases. Guo et al. [14] identified 32 schizophrenia-specific FFLs consisting of miRNAs, TFs, and genes. Afshar et al. [15] explored FFLs entailing miRNAs, TFs, and genes in prostate cancer. These proof-of-concept studies encourage the development of disease-specific FFLs that can be applied to the process of drug repurposing. Here we hypothesized the existence of a set of FFLs in CF where the two regulatory elements are defined by specific TFs and miRNAs, respectively.
Notably, miRNAs are 18 to 25 nt long non-coding RNAs that function in the transcriptional and post-transcriptional regulation of gene expression [16]. miRNAs are involved in different biological processes such as differentiation, apoptosis, and stress response [17], and miRNAs can interact with the 3′UTR of target mRNAs via base-pairing to facilitate the recruitment of a ribonucleoprotein complex that either blocks cap-dependent translation or triggers target mRNA deadenylation and degradation [17]. An increasing number of miRNAs have been identified to regulate cancers [18,19], multiple sclerosis [20], diabetes [21], hepatotoxicity [22], and cardiovascular diseases [23]. miRNAs have also been reported to play a crucial post-transcriptional role in CF [24][25][26][27][28]. For example, miR-126 was shown to regulate the inflammatory signaling pathway and was reported to be decreased in CF respiratory epithelium as compared to non-CF bronchial epithelial cells in vivo and in vitro [29]. Likewise, TFs are key regulators in the control of gene expression by translating cis-regulatory codes [30]. Due to their function and regulatory logic [31], miRNAs and TFs co-regulate the same genes in a complex manner and are therefore suitable elements to construct FFLs.
We therefore hypothesized the existence of a set of FFLs which are composed of both TFs and miRNA to regulate genes in CF and CFTR. Consequently, we constructed CF and CFTR-specific FFLs, and studied the effects of market drugs by inferring perturbations of disease-specific FFLs with the aim to determine their potential utility in the treatment of CF. We focused on approved drugs without boxed warning and are considered to be safe at affordable prices. As a result, we identified market drugs as putative candidates for CF treatment. Strikingly, out of the 48 repurposing drug candidates 26 were confirmed with literature reports and/or existing clinical trials relevant to the treatment of CF patients thus providing evidence for the utility of the employed approach.
CF and CFTR associated gene regulations
Initially, we collected information from diverse public repositories including the Genetic Association Database (GAD) [32,33], Orphanet [34,35], the Online Mendelian Inheritance in Man (OMIM) [36,37], the Function disease ontology annotation (FunDO) [38,39], and PubMed reports (see also Table 1). The broad and diverse information was validated by different experimental platforms. Eventually, a comprehensive list of differentially expressed genes (DEG) was compiled using diverse data sets from cystic fibrosis patients with mild and severe lung disease based on tissue samples obtained from bronchial brushings or nasal epithelium as well as rectal epithelia of CF and non-CF individuals (GEO submission GSE2395, GSE55146, and GSE15568). Collectively, a total of 1,042 DEGs were compiled (Additional file 1: Table S1). To discern CFTR-associated gene regulations from CF-related DEGs, the data reported by Ramachandran et al. [25] were considered and included 419 unique genes (Additional file 1: Table S1).
MiRNA networks of CF-regulated genes (miRNA → gene/TF)
To identify CF-specific miRNAs, data from different sources were integrated, including a literature search using the keywords 'miRNA' and 'cystic fibrosis' in PubMed. Here, we focused on miRNA expression profiling studies in CF patient samples and considered particularly the findings of Oglesby et al. [40] and Bhattacharyya et al. [41], which had information on 93 and 22 regulated miRNAs, respectively. Furthermore, to be able to distinguish between CF and CFTR miRNA networks and to identify commonly regulated ones, data obtained from well-differentiated primary human airway epithelial cultures were considered as reported in Ramachandran et al. [25]. Of note, there were 112 CFTR-associated miRNAs (Additional file 2: Table S2).
MiRNA analysis and target prediction was done with the TargetScan algorithm [42] and included the search for the presence of conserved 8mer or 7mer sites that match the seed region of the miRNA. The functional annotation of predicted targets is based on experimental validation [43,44] and in the case of miRNA → gene/TF pairs to be considered conserved in homo sapiens a total context score higher than -0.4 was applied [45]. To confirm miRNA targets in CF and CFTR networks and to distinguish among individual TFs involved, the predicted target genes were mapped onto a human TFs list in the ChIPBase [46].
Transcription factor networks of CF-regulated genes (TF → gene/miRNA)
The TF and gene/miRNA relationship data were extracted from the ChIPBase [46]. ChIPBase aims to provide high-confidence information on the transcriptional regulation of long non-coding RNA and miRNA genes from ChIP-Seq data. The data were curated from sources such as the NCBI GEO database [47], ENCODE [48], the modENCODE databases [49,50], and PubMed literature citations. Thus, the TFs related to CF and CFTR gene/miRNA networks were extracted from the human hg19 organism with regulatory regions (upstream: 5 kb; downstream: 1 kb).
CF protein-protein interaction network
The STRING 9.1 version [51] was applied to study protein-protein interactions (PPI) using input data derived from [27] and CF patient samples (GEO submission GSE2395, GSE55146, and GSE15568). Initially, a total of 123 CFTR-associated genes were considered and, based on 80 genes that are part of the 15 constructed FFLs, a total of 135 PPIs were observed. Furthermore, for nine disease-specific FFLs and the 66 genes associated with them, a total of 97 PPIs were observed. Additionally, for nine out of 15 FFLs the disease-specific regulation of miRNA was validated by a consensus approach employing 10 different algorithms, that is, DIANA-microT [52], miRanda [53], miRDB [54], miRWalk [55], RNAhybrid [56], PICTAR4 [57], PICTAR5 [57], PITA [58], RNA22 [59], and TargetScan [60] (see Figure 1). Gene targets were considered positive only when confirmed by at least eight algorithms. Importantly, the STRING analysis provided high-confidence PPI interactions based on neighborhood, gene fusion, co-occurrence, co-expression, experiments, text-mining, and so on. In this study, only interactions with confidence scores higher than 0.4 were extracted.
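The consensus rule (a target is kept only if at least eight of the ten algorithms agree) can be sketched in a few lines of Python; the function name and the toy predictions below are hypothetical, not data from the actual tools.

```python
# Hedged sketch of the consensus target call across prediction algorithms.
def consensus_targets(predictions, min_votes=8):
    """predictions: dict mapping algorithm name -> set of (miRNA, gene) pairs.
    Returns the pairs supported by at least `min_votes` algorithms."""
    votes = {}
    for algo, pairs in predictions.items():
        for pair in pairs:
            votes[pair] = votes.get(pair, 0) + 1
    return {pair for pair, n in votes.items() if n >= min_votes}

# Toy input: pair A is predicted by 9 algorithms, pair B by only 3
preds = {f"algo{i}": {("hsa-miR-155", "GENE_A")} for i in range(9)}
for i in range(3):
    preds[f"algo{i}"].add(("hsa-miR-155", "GENE_B"))
kept = consensus_targets(preds)
print(kept)  # only ('hsa-miR-155', 'GENE_A') survives the 8-vote threshold
```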
Enrichment of significantly regulated miRNA and TF in CF gene networks
To assess the statistical significance of genes that were co-regulated by both miRNA and TF, the cumulative hypergeometric test was employed based on the common CF- and CFTR-specific genes of any pair of miRNA and TF, as described by the following formula [45]:

P = 1 - \sum_{t=0}^{x-1} \frac{\binom{N_{miR}}{t} \binom{Total - N_{miR}}{N_{TF} - t}}{\binom{Total}{N_{TF}}}

where x is the number of genes co-regulated by the given miRNA-TF pair, N_miR denotes the number of target genes for a given miRNA, N_TF represents the number of target genes for the corresponding TF, and Total is the number of common genes between all the CF- and CFTR-related genes regulated by TFs and repressed by miRNAs. The Benjamini-Hochberg multiple testing correction was used to adjust the P values (function mafdr.m from MATLAB 7.10.0 (R2010a)), and only those pairs with adjusted P values less than 0.05 were considered.
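Assuming the standard hypergeometric parameterization, this test and the Benjamini-Hochberg adjustment can be sketched as follows. The paper used MATLAB's mafdr.m; scipy and the hand-rolled BH step here are for illustration only, and the counts are invented.

```python
# Hedged sketch of the miRNA-TF co-regulation enrichment test.
from scipy.stats import hypergeom

def ffl_pvalue(k, total, n_mir, n_tf):
    """P(overlap >= k) for a miRNA with n_mir targets and a TF with n_tf
    targets drawn from `total` common CF/CFTR-regulated genes."""
    return hypergeom.sf(k - 1, total, n_mir, n_tf)

def benjamini_hochberg(pvals):
    """Step-up BH adjustment; returns adjusted P values in input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    prev = 1.0
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adj[i] = prev
    return adj

# Invented counts: 10 shared targets out of 400 common genes
p = ffl_pvalue(k=10, total=400, n_mir=30, n_tf=40)
adj = benjamini_hochberg([0.01, 0.04, 0.03, 0.5])
print(f"P = {p:.3g}, BH-adjusted = {adj}")
```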
Drug effects on miRNA expression
The effects of drugs on individual miRNAs were compiled from SM2miR [61]. In the present study, only FDA-approved drugs were considered to be potential repurposing candidates for CF. Moreover, the miRNA-gene-drug relationships were extracted from Pharmaco-miR [62], which provides miRNA pharmacogenomics data manually curated from the literature.
CF-and CFTR-related gene and miRNA expression changes
An outline of the workflow is given in Figure 1, and a summary of CF- and CFTR-related gene and miRNA data is given in Table 1. Initially, a comprehensive list of differentially expressed genes (DEG) was compiled using diverse data sets from CF patients in addition to literature findings regarding CFTR-associated gene networks. Subsequently, common regulation of DEGs by TFs and miRNAs was investigated by means of database searches in addition to experimental data retrieved from literature searches. For this purpose, the publicly available GEO data sets GSE2395, GSE55146, and GSE15568 were analyzed. The data informed on whole genome gene expression profiling in cystic fibrosis patients with mild and severe lung disease using either tissue samples obtained from bronchial brushings or nasal epithelium as well as rectal epithelia of CF and non-CF individuals. In all, 1,042 DEGs were obtained; however, there was little to no overlap among DEGs when individual studies were compared (see Figure 2A).
Furthermore, to discriminate CF-specific and CFTR-related miRNA networks, profiling data obtained from CF patient airway epithelium and CF-related cell lines as well as primary human airway epithelium were considered using the findings reported by Oglesby et al. [40] and Bhattacharyya et al. [41]. As noted for the whole genome gene expression profiling studies, major discrepancies among the reported miRNA profiling studies were observed, with little overlap in identified miRNAs using either bronchial brushings from CF patients or CF bronchial epithelial cell lines (see Figure 2B). With regard to the CFTR-associated miRNA regulations, the data reported by Ramachandran et al. [25] were used and yielded 112 differentially expressed miRNAs. As depicted in Figure 2C, 31 down- and 12 upregulated miRNAs were in common when the findings of Oglesby et al. and CFTR-associated miRNAs were compared, as determined in human airway epithelium. Likewise, two down- and 10 upregulated miRNAs were commonly regulated when the data reported by Bhattacharyya et al. and findings from CFTR-associated miRNAs were compared (Figure 2D). Taken collectively, a total of 93, 22, and 112 uniquely regulated miRNAs were extracted from experimental data and among the three studies seven miRNAs were in common, which permitted an in-depth assessment of the miRNA-CF disease relationship.
Apart from CF-specific gene and miRNA expression changes, several of the identified genes are also co-expressed or are involved in the same pathways or biological processes as determined for the CFTR-associated gene network using human airway epithelium. This is consistent with our understanding of the pathogenesis of CF, with most of the commonly regulated genes influencing folding, sorting, and degradation of proteins, including ubiquitin-mediated proteolysis, protein processing in the endoplasmic reticulum, and the proteasome. It has been established that the ubiquitin-proteasome pathway controls the degradation of CFTR and therefore plays a central role in CF.
To be able to construct FFLs, different types of regulatory relationships were considered, that is, genes regulated by either miRNA (miRNA → gene) or TF (TF → gene), as well as the relationships between miRNA regulating TFs and vice versa (miRNA → TF and TF → miRNA) in addition to the gene-gene interaction as depicted in the work flow diagram ( Figure 1 and Table 2). The findings entrained on the CFTR gene and miRNA networks were validated using data derived from CF patients as detailed in Table 2.
miRNA gene target relationship
Initially, the miRNA targets were predicted using TargetScan (see the Method Section for further details and Table 2). There were a total of 1,615 miRNA → gene pairs, which involved 99 CFTR specific miRNAs (out of 112 miRNAs identified) and 226 CFTR-regulated genes (out of 419 genes identified from Reference [25]). Among them, the miRNAs, hsa-miR-200b, hsa-miR-200c, and hsa-miR-429 regulated the largest number of genes. The average number of targeted genes per miRNA is 16.
It is well known that the miRNAs from the same family share similar regulatory functions and mechanisms [63]. We therefore constructed a miRNA-based network using the CF gene information and investigated whether the relationship between the miRNAs from the same family was preserved as a means to verify the chosen approach. Figure 2 depicts the miRNAs network module where each node is a CF miRNA while an edge denotes the Tanimoto similarity between each of the two miRNAs. It can therefore be demonstrated that the miRNAs from the same family (for example, hsa-let-7a/b/c/e/g) were preserved with higher Tanimoto similarity. Likewise, in the constructed miRNA-gene network NEDD4L was regulated by 33 miRNAs. This gene codes for an E3 ubiquitin protein ligase and knockdown of NEDD4L in lung epithelia causes airway mucus obstruction, goblet cell hyperplasia, inflammation, fibrosis, and even death after 3 weeks of exposure in an animal disease model [64]. Such experimental data support the relevance of the constructed miRNA-gene network.
[Figure 2 caption: (A) Common genes among three independent CF patient-related whole genome gene expression data sets; (B) common miRNAs identified in bronchial brushings from CF patients or CF bronchial epithelial cell lines; (C) commonality among 112 CFTR- and CF-related miRNAs derived from the study of [40]; (D) commonality among 112 CFTR- and CF-related miRNAs derived from the study of [41].]
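The Tanimoto edge weights in the miRNA similarity network can be sketched directly from the target-gene sets; the function and toy target sets below are illustrative assumptions, not the paper's data.

```python
# Hedged sketch: Tanimoto (Jaccard) similarity between two miRNAs,
# computed on their CF target-gene sets, as used to weight network edges.
def tanimoto(a, b):
    """Tanimoto similarity of two sets: |A & B| / |A | B|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

targets = {
    "hsa-let-7a": {"G1", "G2", "G3"},
    "hsa-let-7b": {"G1", "G2", "G4"},   # same family: similar target sets
    "hsa-miR-155": {"G5"},
}
print(tanimoto(targets["hsa-let-7a"], targets["hsa-let-7b"]))   # 0.5
print(tanimoto(targets["hsa-let-7a"], targets["hsa-miR-155"]))  # 0.0
```

Same-family miRNAs share seed sequences and hence targets, so their pairwise Tanimoto scores are high, which is what the network verification exploits.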
Using ChIPBase, a total of 422 miRNA → TF pairs were identified and consisted of 89 CFTR-specific miRNAs and 52 human TFs. Among the 422 miRNA → TF pairs, hsa-miR-27a was involved in the regulation of 14 TFs. Meanwhile, the genes BCL11A, SMAD2/3, and SMAD4 were regulated by the largest number (n =30) of miRNAs. Note, reduced SMAD3 protein expression and altered TGFβ1mediated signaling in CF epithelial cells were reported [65].
TF-miRNA/gene regulatory networks
TF → miRNA circuitries were constructed using information retrieved from ChIPBase [46]. A total of 3,295 TF → miRNA combinations were computed and this involved 114 and 102 unique TFs and miRNAs, respectively (see Additional file 3: Table S3). For instance, hsa-miR-106b, hsa-miR-25, and hsa-miR-93 were regulated by 72 TFs. Similarly, a total of 16,860 TF-gene pairs were computed and involved 105 TFs and 387 gene targets (see Additional file 3: Table S3). Of these TFs, c-Myc targeted the largest number of CFTR-related genes. It was earlier demonstrated that proteolysis of c-Myc in vivo is mediated by the ubiquitin-proteasome pathway [66]. Among the 387 CFTR-related genes, the gene regulated by the largest number of TFs was UBE2D3 (ubiquitin-conjugating enzyme E2 D3). We further searched for common genes among CFTR and the 1,042 DEGs and found 38 genes to be mutual.
CFTR-specific feed-forward loops (FFLs)
It had been demonstrated that composite FFLs (that is, the combined miRNA and TF participating in the regulation of target genes) are more effective in unveiling disease mechanisms than single one as denoted by TF → miRNA or miRNA → TF considerations [45]. As shown in the third step of Figure 1 and as summarized in Table 2, FFLs were evaluated for their significance using a hypergeometric test with multiple testing corrections. Such analysis revealed 449 unique CFTR-entrained FFLs including 41 miRNA-FFLs, 393 TF-FFLs, and 15 composite-FFLs, as shown in Additional file 3: Table S3. The results indicated that the constructed composite-FFLs were of largest relevance followed by miRNA-FFLs and TF-FFLs. Therefore, and based on statistical significance the 15 composite FFLs were employed to search for repurposing candidates for the treatment of CF (Additional file 4: Figure S1). These FFLs contained 12 miRNAs, 11 TFs, and 104 CFTR-related genes, respectively.
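The composite-FFL criterion (mutual miRNA↔TF regulation plus a gene targeted by both regulators) can be sketched as a simple motif search over the four edge sets described above; the function and toy edges below are hypothetical.

```python
# Hedged sketch: enumerate composite FFLs from the four regulatory edge sets.
def composite_ffls(mir_gene, tf_gene, mir_tf, tf_mir):
    """Each argument is a set of (source, target) edges.
    Returns (miRNA, TF, gene) triples with mutual miRNA<->TF regulation
    and a gene targeted by both the miRNA and the TF."""
    loops = []
    for mir, tf in mir_tf:
        if (tf, mir) not in tf_mir:
            continue  # not a mutual miRNA<->TF pair
        mir_targets = {g for m, g in mir_gene if m == mir}
        tf_targets = {g for t, g in tf_gene if t == tf}
        for gene in sorted(mir_targets & tf_targets):
            loops.append((mir, tf, gene))
    return loops

# Toy edges (illustrative only)
loops = composite_ffls(
    mir_gene={("miR-155", "NEDD4L"), ("miR-155", "SMAD3")},
    tf_gene={("SP1", "NEDD4L")},
    mir_tf={("miR-155", "SP1")},
    tf_mir={("SP1", "miR-155")},
)
print(loops)  # [('miR-155', 'SP1', 'NEDD4L')]
```

In the actual pipeline, each enumerated triple would then be scored with the hypergeometric test before being accepted as a significant composite FFL.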
We further considered the results of the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis for the commonly targeted genes of the 15 composite FFLs. Some composite FFLs, such as hsa-miR-192↔CTCF and hsa-miR-191↔TCF7L2, have just one gene in common; thus no enriched pathways were obtained. As depicted in Additional file 5: Figure S2, 24 different pathways belonging to 13 different functional categories were considered. Among them, two pathways, ubiquitin-mediated proteolysis and protein processing in the endoplasmic reticulum, dominated the composite FFLs; their major function is folding, sorting, and degradation, which are key mechanisms in CF [67]. Other FFLs are involved in insulin and TGF-beta signaling pathways and endocytosis. For instance, CF-related diabetes (CFRD) is a common complication of CF and insulin resistance may also affect lung function [68]. Likewise, transforming growth factor-beta (TGF-beta) plays a central role in fibrosis, contributing to the influx and activation of inflammatory cells, the epithelial-to-mesenchymal transdifferentiation (EMT) of cells, and the activation of fibroblasts and modulation of extracellular matrix production [69]. Downregulation of CFTR by TGF-beta limits epithelial chloride secretion, which causes mucus block [70]. It was also reported that CF is associated with a defect in apical receptor-mediated endocytosis [71].
Validation of FFLs in CF patient samples
To determine disease relevance of the FFLs and to study protein-protein-interactions (PPI) among members of the composite FFLs the following data were considered: (1) whole genome gene expression data; and (2) miRNA profiling studies using samples obtained from bronchial brushings or nasal epithelium as well as rectal epithelia of CF patients with mild and severe disease and non-CF individuals.
Initially, a total of 123 CFTR genes were retrieved from the study of Ramachandran et al. and for 80 genes a total of 135 PPIs were observed in STRING network analysis. This demonstrates that the network partners actually interact with each other. Moreover, for seven and nine disease-regulated miRNAs and TFs, respectively, a total of 97 PPIs among 66 regulated genes were observed further evidencing interactions among the predicted targets (see Figure 3).
Subsequently, we considered disease-regulated miRNA and its directionality based on CF patient samples and therefore analyzed the data of Oglesby et al. [40] and Bhattacharyya et al. [41] with respect to the composite FFLs. This revealed a total of seven miRNAs (hsa-miR-26b, hsa-miR-29c, hsa-miR-135b, hsa-miR-155, hsa-miR-192, hsa-miR-200c, and hsa-miR-340) and nine FFLs to be CF associated. Note, in the case of miR-155 three different TFs are involved, that is, SP1, NFKB1, and EBF1, therefore giving rise to three distinct disease-relevant FFLs. We considered miRNAs whose expression was either increased or decreased in CF patient samples (see Figure 4). In order to predict targets of disease-associated FFLs we employed a consensus approach using 10 different algorithms (see Additional file 6: Figure S3). The predicted gene targets were considered positive only when confirmed by at least eight different algorithms. Apart from disease-specific miRNAs that were used to construct FFLs, the regulation of target genes was also considered in CF patient samples. As described above we compiled a total of 1,042 DEGs derived from GEO submissions GSE2395, GSE55146, and GSE15568, and observed DEGs to be commonly regulated in CF samples and disease-specific FFLs, once again providing evidence for the clinical relevance (see Figure 5).
Repurposing candidates for the treatment of CF
Drug repositioning is a process of identifying alternative indications for existing drugs with acceptable safety at an affordable price. To identify drugs with potential use in the treatment of CF patients, we exploited small molecules that affect the expression of miRNAs which are part of the composite FFLs. We retrieved data from two databases (that is, SM2miR and Pharmaco-miR) as described in Figure 1. Notably, SM2miR compiles a list of small molecules that interact with miRNAs from the literature, while Pharmaco-miR provides drug-miRNA associations based on the PharmGKB data [72]. We then compared the marketed drug list from DrugBank (version 3.0, [73]) with those identified by SM2miR and Pharmaco-miR as having an ability to influence the expression of miRNAs which are part of the FFLs. This process identified 48 unique drugs designated as repurposing candidates for the treatment of CF patients.

Figure 3 Protein-protein interaction networks of CFTR-related genes. A total of 419 genes were retrieved from the study of Ramachandran et al., of which 123 could be mapped to the STRING database version 9.1. Only PPI interactions for Homo sapiens were considered and a confidence score >0.4 was required. Among them we considered those genes linked to the 15 composite FFLs that were constructed. This revealed 80 genes and a total of 135 PPIs to be in common. Subsequently, for nine disease-specific composite FFLs, 66 genes and 97 PPIs were observed.
To assess the validity of the CF repurposing candidates, we conducted a two-step analysis. First, we queried clinicaltrials.gov (www.clinicaltrials.gov) that archives clinical studies of human subjects conducted around the world. Collectively, Table 3 compiles all 48 repurposing candidates for CF along with their original indications and literature/clinical trial data. Note that eight out of 48 drugs were already investigated for the treatment of CF patients. For the remaining drug candidates we additionally queried PubMed using the keyword ('drug name' (and) 'cystic fibrosis') followed by reading. Here, 18 out of 43 repurposing candidates have literature citations to support their potential use in CF. Collectively, we found 54.2% of the candidates (26 drugs out of 48 repurposing candidates) to have at least one published study or clinical trial related to CF. Additional file 7: Table S4 lists the information of all 48 repurposing candidates related to drug safety and affordability that were obtained from the FDA-approved drug product labels and the DrugBank V3.0 database.
We further assessed the therapeutic indications of the repurposing candidates and found two categories, that is, Alimentary tract and metabolism (P <0.0016) and Antineoplastic and immunomodulating agents (P <0.0009), to be significantly enriched. For the different therapeutic categories of the 48 drug repurposing candidates see Figure 6. Note that the two therapeutic categories include some drugs with boxed warnings that need to be considered.

Figure 4 Composite FFLs of CF-regulated miRNAs (miRNAs up- and down-regulated in CF patients are indicated). Nodes are marked as green diamonds, blue rectangles, and gray ellipses, denoting transcription factors (TFs), miRNAs, and genes, respectively. Genes color-coded in red are among the 1,042 CF genes retrieved from three independent CF gene expression data sets. Edges shown as t-shapes, circle-shapes, and gray solid lines denote miRNAs regulating genes/TFs, TFs regulating genes/miRNAs, and gene-gene interactions, respectively.
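The category-enrichment P values above come from Fisher's exact test on therapeutic-category counts. A stdlib-only sketch of the right-tail computation via the hypergeometric distribution is given below; the 2×2 counts are invented for illustration and are not the paper's actual data.

```python
# Right tail of Fisher's exact test, computed from the hypergeometric
# distribution using only the standard library.
from math import comb

def fisher_right_tail(k, K, n, N):
    """P(X >= k) where X ~ Hypergeom(N, K, n):
    N drugs total, K in the category, n repurposing candidates,
    k candidates observed to fall in the category."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Toy numbers: 12 of 48 candidates fall in a category covering
# 200 of 1500 marketed drugs (expected count is only ~6.4).
p = fisher_right_tail(12, 200, 48, 1500)
```

By Vandermonde's identity the tail at k = 0 sums to exactly 1, which makes a convenient sanity check for the implementation.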
Discussions
This study aimed to define suitable drug repurposing candidates for the treatment of CF. For this purpose, FFLs were entrained on CFTR and CF gene networks. Applying FFLs to an entire drug landscape is a complex undertaking, and next to safety, affordability was considered. In all, 41 miRNA-FFLs, 393 TF-FFLs, and 15 composite FFLs were computed. Using diverse computational strategies, gene targets were predicted based on disease-regulated miRNAs, involving a consensus approach among different algorithms (see Additional file 6: Figure S3). Validation was achieved with CF patient sample-specific information, and FFLs were used to enrich the repurposing drug candidate pipeline by considering small-molecule effects on miRNA expression. Eventually, 48 repurposing candidates were obtained; their usefulness was considered based on clinical trial information, literature findings, safety concerns, and affordability.
Based on its ability to influence miR-26b [74] and the transcription factor CREBBP [75], dexamethasone was considered as a repurposing candidate. Dexamethasone is a potent steroid and acts as an anti-inflammatory and immunosuppressant. Its use in CF patients is consistent with the current practice of glucocorticoids in the treatment of lung inflammation [76]. It was reported that low doses of dexamethasone delivered by autologous erythrocytes slows the progression of lung disease in CF patients [77]. As dexamethasone is an approved prescription drug without boxed warning, it provides additional value for its application in CF.
The employed testing strategy also predicted statins as interesting repurposing candidates, and it was reported that statins keep ceramide levels normal in CF patients [78,79]. As ceramides and sphingolipids are components of lipid rafts, they play decisive roles in transmembrane signaling [80]. We found simvastatin to be implicated in the hsa-miR-200c↔JUN regulatory FFL. However, some statins are associated with severe adverse drug reactions,
Figure 5 Protein-protein interaction networks of CF-related miRNAs (genes expressed and not expressed in CF patients are indicated). A total of 7 CF-regulated miRNAs were used to predict gene targets by employing a total of 10 different algorithms. This defined 263 putative targets, which were mapped to the STRING database version 9.1 and revealed 247 PPIs among 138 gene targets. Only PPI interactions for Homo sapiens were considered and a confidence score >0.4 was required. The predicted 138 gene targets were mapped to 1,042 DEGs identified among three independent CF patient-related whole genome gene expression data sets. This identified seven genes in common and a total of 19 PPIs.

Likewise, phenylimidazothiazoles were reported to activate wild-type and mutant CFTR in transfected cells and thus have been proposed as a drug remedy for CF [83,84]. The present study inferred levamisole to perturb the hsa-miR-26b↔CREBBP and hsa-miR-200c↔JUN FFLs; this drug was used to treat Dukes' stage C colon cancer and worm infestations. It was demonstrated that levamisole inhibited intestinal Cl− transport via basolateral K+ channel blockade [85], and this provides a molecular rationale for further evaluation.

Figure 6 The distribution of repurposing candidates for CF at the first level of the Anatomical Therapeutic Chemical Classification System (ATC). Each bar is divided by safety concern, including boxed warning, no boxed warning, and nutritional supplementation (dietary shortage or imbalance). The therapeutic categories significantly associated with CF are A and L, based on Fisher's exact test with a P value cutoff of 0.01.
In another clinical trial (NCT01070446), the effects of choline supplementation in children with CF were investigated. Note that children with CF are reported to have depleted levels of choline [86], and choline is involved in two composite FFLs, including hsa-miR-200c↔JUN and hsa-miR-29c↔TFAP2C, as per our investigations.
Furthermore, among the repositioning candidates were thiazolidinediones (TZDs), which were initially used to treat type-II diabetes. This class of drugs includes both pioglitazone and rosiglitazone. Some evidence exists that TZDs could be used to ameliorate the severity of the CF phenotype [87]. Pioglitazone and rosiglitazone are known to activate peroxisome proliferator-activated receptor gamma (PPARγ), and it has been suggested that a reversible defect in PPARγ signaling in Cftr-deficient cells underlies the improvement of the severity of the CF phenotype in mice. Additionally, in clinical trial NCT00322868 pioglitazone was evaluated for its ability to decrease inflammation in CF lung patients. While thiazolidinediones are promising, unfortunately this class of drugs has been reported to exacerbate congestive heart failure in some patients and is thus labelled with a boxed warning. In addition, troglitazone was withdrawn from the market due to severe liver toxicity [88].
The performed analysis also identified some anti-cancer drugs for potential use in CF patients [89,90] and it was hypothesized that induction of homologous recombination in respiratory epithelium helps to improve the lung function of patients [90]. In our analysis, vorinostat (SAHA), a histone-deacetylase inhibitor and anti-cancer drug used to treat cutaneous T-cell lymphoma, appeared to be a reasonable repurposing candidate. This drug was reported to restore surface channel activity in human primary airway epithelia to a level that was 28% of wild-type CFTR and does not have a boxed warning, but is fairly expensive.
The developed drug repurposing strategy is a modular system and each of its components can be modified or even replaced by other algorithms. Besides TargetScan, there are alternative approaches such as RNAhybrid [56], DIANA-microT [52], RNA22 [59], miRanda [53], PicTar [57], and miRWalk [55] to predict gene targets. Indeed, it has been suggested that combining diverse algorithms could provide more confidence in the inferred drug-miRNA-gene relationships [20,91]. To this effect we employed a consensus approach for predicting gene targets of disease-regulated miRNAs by using 10 different algorithms. Lastly, the directions of the disease-regulated miRNAs were considered, and the consensus approach revealed that 48% of the targets predicted by TargetScan could be confirmed by at least one other algorithm. Moreover, for the TF binding site predictions, we utilized experimental data retrieved from ChIP-Seq experiments. Other technologies such as ChIP-on-chip arrays or in silico approaches based on position weight matrices can also be employed to identify putative transcription factor binding sites [92].
The present study focused on repurposing candidates from the miRNA-small molecule perspective, while the relationship between TFs and small molecules, that is, drugs affecting transcription factor expression and activity, was not considered in detail. So far only a few drugs target TFs, which limits the choice of drug repurposing candidates; nonetheless, the number of drugs targeting TFs will increase with time [93]. A recent study demonstrated heat shock transcription factor 1 (HSF1) as a potential new therapeutic target in multiple myeloma [94], and other studies revealed that nuclear transcription factor-kappa B (NFκB) could be a potential target for drug development in different disease entities [95][96][97]. We also found NFκB to be involved in several composite FFLs, including hsa-miR-155↔NFκB and hsa-miR-340↔NFκB. Note that ibuprofen, a non-steroidal anti-inflammatory drug (NSAID), is one of the anti-inflammatory therapies used in the treatment of CF [98,99]. It has been demonstrated that high-dose ibuprofen causes modest suppression of NFκB transcriptional activity in CF respiratory epithelial cells. Furthermore, miR-155 promotes inflammation in CF by driving hyperexpression of interleukin-8 [41]. In future studies, the transcription factor-drug relationship will be explored in greater detail.
The goal of drug repurposing is to bring new therapies to the market at a lower risk, reduced cost, and less development time when compared to conventional drug development programs [100]. However, the safety assessment in a new disease indication is still an important concern in the regulatory process. While the safety assessment is based on drug label information, the drug repositioning approach may involve different formulations and changes in dosage that need to be considered in different patient populations. For instance, the use of high dose ibuprofen in CF is concerning for its adverse drug reactions, most notably in causing GI hemorrhage, myocardial infarction, drug-induced liver injury, and even renal failure [7]. Additional approaches for safety assessment of market drugs are the U.S. FDA Adverse Event Reporting System (FAERS) [101] and the FDA's Sentinel initiative [102]. Finally, 10% of healthcare expenditure in the U.S. has been attributed to prescribed drugs [103]. Thus, drug affordability will require consideration.
Conclusion
In conclusion, we report a strategy for the rational selection of drug repurposing candidates based on miRNA-TF FFLs. The methodology developed is straightforward
New variant of ElGamal signature scheme
In this paper, a new variant of ElGamal signature scheme is presented and its security analyzed. We also give, for its theoretical interest, a general form of the signature equation.
Introduction
Since the invention of public key cryptography in the late 1970s [2,13,12], several new subjects related to data security, such as identification, authentication, zero-knowledge proofs and secret sharing, were explored. But among all these issues, and perhaps the most important, is how to build secure digital signature systems. For more than three decades, the topic, probably due to its fundamental and practical role in electronic funds transfer, was intensively investigated [10,15,14,4,1,11,9]. There is only one principle on which digital signature algorithms rest. To sign a message m, Alice, with the help of her private key, must answer a question asked by Bob, the verifier. The question is naturally a function of m. Nobody other than Alice is able to forge her signature and give the right answer, not even the asker himself. In most digital signature schemes, the considered question is a difficult mathematical equation depending on m as a parameter. Only Alice, because she possesses a private key, is able to solve it. In this protocol, we are not necessarily concerned with the security of the transmitted data. Indeed, Bob and Alice can publish respectively the equation and the solution on two protected and separate personal servers.
In 1985, ElGamal [3], inspired by the Diffie-Hellman ingenious ideas on new directions in cryptography [2], was one of the first to propose a practical signature scheme. Used properly, this signature system has never been broken. He built it on a simple equation with two unknown variables. The hardness of this equation relies on the discrete logarithm problem [7, p.103]. In general, from a public key cryptosystem, one can derive a signature scheme. Curiously, in his paper [3], ElGamal did not exploit this possibility, and it is still unclear how he found his signature equation. This fact has encouraged many researchers to look for equations having properties similar to those of ElGamal. See, for instance, [14,4,5]. Some practical signature protocols such as Schnorr's method [14] and the digital signature algorithm DSA [8] are directly derived from the ElGamal scheme. ElGamal's signature scheme permanently faces attacks that are more and more sophisticated. If the system is completely broken, alternative protocols, previously designed, prepared and tested, would be useful. In this work we present a new variant of the ElGamal signature method and analyze its security. Furthermore, we give, just for its theoretical interest, a general form of our signature equation. The paper is organized as follows. In section 2, we review the basic ElGamal signature algorithm and recall the main known attacks. Our new variant and a theoretical generalization are presented in section 3. We conclude in section 4. In the sequel, we will adopt the notations of ElGamal's paper [3]. Z, N are respectively the sets of integers and non-negative integers. For every positive integer n, we denote by Z_n the finite ring of modular integers and by Z*_n the multiplicative group of its invertible elements. Let a, b, c be three integers. The greatest common divisor of a and b is denoted by gcd(a, b). We write a ≡ b [c] if c divides the difference a − b, and a = b mod c if a is the remainder in the division of b by c.
We start by describing the original ElGamal signature scheme.
ElGamal Original Signature Scheme
We recall in this section the basic ElGamal protocol in three steps, followed by the best-known attacks. 1. Alice begins by choosing her public parameters: a large prime integer p and a primitive element α of the multiplicative group Z*_p. She picks a random private key x ∈ {1, 2, . . . , p − 1} and computes y = α^x mod p. We consider then that (p, α, y) is Alice's public key and x her private key.
2.
Assume that Alice wants to sign the message m < p. She must solve the congruence

α^m ≡ y^r r^s [p], (1)

where r and s are two unknown variables. Alice fixes arbitrarily r to be r = α^k mod p, where k is chosen randomly and invertible modulo p − 1. She has exactly ϕ(p − 1) possibilities for k, where ϕ is the Euler phi function [7, p.65]. Equation (1) is then equivalent to:

m ≡ x r + k s [p − 1]. (2)

As Alice possesses the secret key x, and as the integer k is invertible modulo p − 1, she can compute the second unknown variable s = k^{−1}(m − x r) mod (p − 1). 3. Bob can verify the signature by checking that congruence (1) is valid.
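As a concrete illustration of the three steps above, here is a toy sketch of ElGamal key generation, signing and verification. It uses the standard relations r = α^k mod p, s = k^{−1}(m − x·r) mod (p − 1), and the check α^m ≡ y^r r^s (mod p). The tiny parameters (p = 467, α = 2) are for illustration only; this is not a secure implementation, and the message m is a plain number below p.

```python
# Toy ElGamal signature: tiny prime p = 467 with primitive root 2.
from math import gcd
from random import randrange

p, alpha = 467, 2          # public parameters
x = 127                    # Alice's private key
y = pow(alpha, x, p)       # public key component

def sign(m):
    # pick k random and invertible modulo p - 1
    while True:
        k = randrange(2, p - 1)
        if gcd(k, p - 1) == 1:
            break
    r = pow(alpha, k, p)
    s = (pow(k, -1, p - 1) * (m - x * r)) % (p - 1)
    return r, s

def verify(m, r, s):
    # check congruence (1): alpha^m == y^r * r^s (mod p)
    return pow(alpha, m, p) == (pow(y, r, p) * pow(r, s, p)) % p

r, s = sign(100)
```

`pow(k, -1, p - 1)` (Python 3.8+) computes the modular inverse that the scheme needs; the attack discussed next exploits reusing the same k.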
The key generation problem must be taken into account. There exist essentially probabilistic algorithms for generating prime integers. In a recent previous work [6], we obtained experimental results on the subject. Now, we recall the main known attacks.
Main attacks
The first attack was mentioned by ElGamal himself [3]. It is not recommended to sign two different messages with the same secret exponent. As the complete justification of this attack does not figure in the ElGamal paper, we reproduce here the proof from [16, p. 291], which seems to us less restrictive than that in [7, p.455].
Proposition 2.1. If Alice signs more than one message with the same secret exponent, then her system can be totally broken.
Proof. Let (m1, r, s1) and (m2, r, s2) be the signatures of the two messages m1 and m2 produced with the same secret exponent k. Due to relation (2), we retrieve Alice's secret key x if we find the value of the parameter k, provided that r is invertible. Subtracting the two instances of relation (2) gives

k(s1 − s2) ≡ m1 − m2 [p − 1]. (3)

If we put gcd(s1 − s2, p − 1) = d, there exist two integers S and P such that s1 − s2 = d S, p − 1 = d P and gcd(S, P) = 1. Note that d divides m1 − m2; write m1 − m2 = d M. Thus relation (3) becomes:

k S ≡ M [P], i.e. k ≡ M S^{−1} [P]. (4)

Since k < p − 1 and p − 1 = d P, we deduce that k = K + iP for some 0 ≤ i < d, where K = M S^{−1} mod P. By equality (4), we can test every such candidate value of k and check if r ≡ α^k [p]. We find k if d is not too large.
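The proposition can be sketched end-to-end in code. The toy parameters below are chosen so that the easy case gcd(s1 − s2, p − 1) = 1 applies, and k is then recovered by a single modular inversion; all values are illustrative, and the general linear-congruence solver for x at the end is an implementation detail of this sketch, not part of the paper's proof.

```python
# Same-exponent attack: two messages signed with the same k leak k,
# and then the private key x, to any observer of the signatures.
from math import gcd

p, alpha = 467, 2                  # tiny prime, primitive root
x_secret, k = 127, 213             # reused exponent k is the flaw
y = pow(alpha, x_secret, p)        # public
r = pow(alpha, k, p)               # shared by both signatures

def sign_with_fixed_k(m):
    return (pow(k, -1, p - 1) * (m - x_secret * r)) % (p - 1)

m1, m2 = 100, 251                  # chosen so gcd(s1 - s2, p - 1) = 1
s1, s2 = sign_with_fixed_k(m1), sign_with_fixed_k(m2)

# Attacker's view: (m1, r, s1), (m2, r, s2), y, p, alpha.
# Relation (2) twice gives m1 - m2 = k (s1 - s2) (mod p - 1).
assert gcd(s1 - s2, p - 1) == 1    # the easy case (d = 1)
k_found = ((m1 - m2) * pow(s1 - s2, -1, p - 1)) % (p - 1)

# Then x solves x*r = m1 - k*s1 (mod p - 1). Solve the general linear
# congruence and pick the root consistent with the public key y.
c = (m1 - k_found * s1) % (p - 1)
d = gcd(r, p - 1)
n = (p - 1) // d
x0 = ((c // d) * pow(r // d, -1, n)) % n
x_found = next(x0 + i * n for i in range(d) if pow(alpha, x0 + i * n, p) == y)
```

When d > 1 the same idea still works: there are only d candidate values of k (and of x), each testable against the public values, which is exactly the "d not too large" condition in the proof.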
In 1996, Bleichenbacher [1] has discovered an important fact : when some parameters are smooth [16, p.197], it is possible to forge ElGamal signature without solving the discrete logarithm problem. We present here a slightly modified version of his result.
Proposition 2.2. Let (p, α, y) be Alice's public key. Suppose that β < p is a positive integer for which one can efficiently compute t ∈ N such that is smooth; then an adversary of Alice will be able to forge her signature for any given message M. (1) becomes: Hence s ≡ t(m − β z0) [p − 1], and then the couple (r, s) is a valid signature of the message M, which completes the proof. Observe that it is not so surprising to choose r = β, for r = β^i mod p, i ∈ N, implies that β is another generator of Z*_p.
The next section presents our main contribution.
New Variant and Theoretical Generalization
In this section, we suggest a new variant of ElGamal signature scheme based on an equation with three unknown variables. The method does not need the computation of the secret exponent inverse and so avoids the use of the extended Euclidean algorithm. Technical report [4], although it collected several signature equations, did not study the case we propose here.
Our protocol
We suppose first that h is a public secure hash function. We can take h equal to the secure hash algorithm SHA-1 [7, Chap.9] and [16, Chap.5]. 1. Alice begins by choosing her public key (p, α, y), where p is a large prime integer, α is a primitive element of the finite multiplicative group Z*_p and y = α^x mod p. The element x, which is a random integer in {1, 2, 3, . . . , p − 1}, is Alice's private key.
2.
Assume that Alice wants to sign the message M < p. She must solve the congruence

α^t ≡ y^r r^s s^m [p], (5)

where r, s and t are three unknown variables and m = h(M) mod p. Alice fixes arbitrarily r to be r = α^k mod p, and s to be s = α^l mod p, where k, l are chosen randomly in {1, 2, . . . , p − 1}. Equation (5) is then equivalent to:

t ≡ x r + k s + l m [p − 1]. (6)

As Alice holds the secret key x and knows the values of r, s, k, l, m, she is able to compute the third unknown variable t.
3. Bob can verify the signature by checking that congruence (5) holds.
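The signing and verification steps of the variant can be sketched as follows. The sketch assumes congruence (5) reads α^t ≡ y^r r^s s^m (mod p), as written out later in the proof of Proposition 3.2, so that with r = α^k and s = α^l the signer takes t ≡ x·r + k·s + l·m (mod p − 1), consistent with Remark 3.3. Tiny parameters, no hashing of M; illustration only.

```python
# Three-variable variant: no modular inverse is needed to sign.
from random import randrange

p, alpha = 467, 2          # tiny prime, primitive root
x = 127                    # Alice's private key
y = pow(alpha, x, p)

def sign(m):
    k = randrange(1, p - 1)
    l = randrange(1, p - 1)
    r = pow(alpha, k, p)
    s = pow(alpha, l, p)
    t = (x * r + k * s + l * m) % (p - 1)
    return r, s, t

def verify(m, r, s, t):
    # congruence (5): alpha^t == y^r * r^s * s^m (mod p)
    return pow(alpha, t, p) == (pow(y, r, p) * pow(r, s, p) * pow(s, m, p)) % p

r, s, t = sign(321)
```

Note that unlike the original scheme, k and l need not be coprime to p − 1, which is the point made in the small example below.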
Our scheme has the advantage that it does not need the use of the extended Euclidean algorithm for computing k^{−1} modulo p − 1. Maybe this can be an answer to problems evoked in [9, subsection 1.3].
To illustrate the technique, we give the following small example. Notice here that k and l are even integers, unlike in the ElGamal protocol where the exponent k is always odd since it must be relatively prime with p − 1.
Security analysis
Suppose that Oscar is an adversary of Alice. Let us discuss some possible and realistic attacks.
Attack 1: Knowing all signature parameters for a particular message M, Oscar tries to find Alice's secret key x.

Therefore, Oscar is confronted with the hard discrete logarithm problem. If Oscar prefers to work with relation (6), he needs to know k and l. Their computation leads to the discrete logarithm problem.
Attack 2: Oscar tries to forge Alice's signature for a message M by first fixing arbitrarily two of the unknown variables and looking for the third parameter.
(1) Suppose for example that Oscar has fixed r and s, and tries to solve equation (5) in the variable t. But here again, he will be confronted with the discrete logarithm problem.
(2) Assume that Oscar has fixed r and t. We have from relation (5): r^s s^m ≡ α^t y^{−r} [p]; and there is no known way to solve this equation.
(3) Assume now that Oscar has fixed s and t. We have from relation (5): y^r r^s ≡ α^t s^{−m} [p]; and this equation is similar to the last case, so it is intractable.
Attack 3: Let us admit that Oscar has collected n valid signatures (r_i, s_i, t_i) for messages M_i, i ∈ {1, 2, 3, . . . , n}, n ∈ N. He will obtain a system of n modular equations:

(S): t_i ≡ x r_i + k_i s_i + l_i m_i [p − 1], where ∀i ∈ {1, 2, 3, . . . , n}, r_i = α^{k_i} mod p, s_i = α^{l_i} mod p and m_i = h(M_i) mod p.

Since system (S) contains 2n + 1 unknown variables x, k_i, l_i, i ∈ {1, 2, 3, . . . , n}, Oscar can find several valid solutions. However, as x is Alice's secret key, it has a unique value, and therefore Oscar will never be sure which value of x is the correct one. Consequently, this attack is to be rejected. The next result is similar to one that exists in the ElGamal scheme.
Proposition 3.2. If no hash function is used, then Oscar can forge existentially Alice signature.
Proof. Assume that Alice produces the parameters (r, s, t) as a signature for the message M. So α^t ≡ y^r r^s s^m [p]. Let k, k′, l, l′ ∈ N be four arbitrary integers with gcd(l′, p − 1) = 1. If Oscar chooses r ≡ α^k y^{k′} [p] and s ≡ α^l y^{l′} [p], he would obtain:

t ≡ k s + l m [p − 1], (7.1)
r + k′ s + l′ m ≡ 0 [p − 1]. (7.2)

Oscar computes m from equality (7.2): m ≡ −(r + k′ s) l′^{−1} [p − 1]; and t from (7.1). Thus (r, s, t) is a valid signature for the message m. Remark 3.3. Alice can sign two messages with the same couple of secret exponents. Indeed, let (r, s, t1) and (r, s, t2) be the signatures of the two different messages M1 and M2 associated with the secret exponents (k, l). We have t1 − t2 ≡ l(m1 − m2) [p − 1]. We can follow the method used in the proof of Proposition 2.1 and find the value of l, but it seems that it is not an easy task to retrieve the secret parameters k and x.
Complexity of our method :
As in [5], let T_exp, T_mult, T_h be respectively the time to perform a modular exponentiation, a modular multiplication, and the hash function computation of a message M. We ignore the time required for modular additions, subtractions and comparisons, and make the conversion T_exp = 240 T_mult. The signer Alice needs to perform two modular exponentiations, three modular multiplications and one hash function computation. So the global required time is: T_1 = 2 T_exp + 3 T_mult + T_h = 483 T_mult + T_h. The verifier Bob needs to perform four modular exponentiations, two modular multiplications and one hash function computation. So the global required time is: T_2 = 4 T_exp + 2 T_mult + T_h = 962 T_mult + T_h. The cost of communication, without M, is 6 |p|, since to sign, Alice transmits (p, α, y) and (r, s, t); |p| denotes the bit-length of the integer p.
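The timing bookkeeping above reduces to simple arithmetic once the stated conversion T_exp = 240 T_mult is substituted (T_h is left symbolic, so it is omitted here); the signer's total follows from the stated counts of two exponentiations and three multiplications.

```python
# Operation counts from the text, expressed in units of T_mult.
T_EXP = 240  # one modular exponentiation = 240 modular multiplications

signer_cost = 2 * T_EXP + 3     # two exponentiations, three multiplications
verifier_cost = 4 * T_EXP + 2   # four exponentiations, two multiplications
```

The verifier's total matches the 962 T_mult figure given in the text.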
Observe that the complexity of our method is not too high relative to that of the ElGamal scheme or to that in [5].
Theoretical generalization
Let h be a public secure hash function.
1.
Alice begins by choosing her public key (p, α, y), where p is a large prime integer, α is a primitive element of the finite multiplicative group Z*_p and y = α^x mod p, where x, a random integer in {1, 2, 3, . . . , p − 1}, is Alice's private key.
As Alice holds the secret key x and knows the values r_i, k_j, m, i ∈ {1, 2, . . . , n}, she is able to compute the (n + 1)th unknown variable t.
Conclusion
In this work, we described a new variant of the ElGamal signature scheme and analyzed its security. Our method relies on an equation similar to ElGamal's with three unknown variables, and it avoids the use of the extended Euclidean algorithm. We also gave a generalization for its theoretical interest. For the future, one may try to see how to improve our new variant. One idea is to replace the modular group Z*_p by a subgroup whose order is a prime divisor of p − 1, or by other remarkable structures such as the elliptic curve group.
The Comparison of ChatGPT 3.5, Microsoft Bing, and Google Gemini for Diagnosing Cases of Neuro-Ophthalmology
Objective: We aim to compare the capabilities of ChatGPT 3.5, Microsoft Bing, and Google Gemini in handling neuro-ophthalmological case scenarios. Methods: Ten randomly chosen neuro-ophthalmological cases from a publicly accessible database were used to test the accuracy and suitability of all three models, and the case details were followed by the following query: "What is the most probable diagnosis?" Results: In terms of accuracy of diagnosis, all three chatbots (ChatGPT 3.5, Microsoft Bing, and Google Gemini) gave the correct diagnosis in four (40%) out of 10 cases, whereas in terms of suitability, ChatGPT 3.5, Microsoft Bing, and Google Gemini gave suitable responses in six (60%), five (50%), and five (50%) out of 10 case scenarios, respectively. Conclusion: ChatGPT 3.5 performs better than the other two when it comes to handling neuro-ophthalmological case difficulties. These results highlight the potential benefits of developing artificial intelligence (AI) models for improving medical education and ocular diagnostics.
Introduction
The use of deep learning (DL) and artificial intelligence (AI) in medicine, especially ophthalmology, has advanced significantly since 2015 [1]. Recently, there has been an increase in the use of natural language processing (NLP) in ophthalmology, which involves using AI to understand and converse with human language [2].

The publication of massive DL models known as foundation models has drawn a lot of media attention to NLP in recent months [3]. A subfield of artificial intelligence called "generative AI" is concerned with using a massive amount of current data to create new, original content such as writing, images, or audio. NLP advances have made chatbots a potentially useful tool in the healthcare industry. Recently, there has been interest in assessing large language models' (LLMs) capacity to comprehend and produce natural language in the medical field [4].

LLMs have advanced to provide responses that are getting closer to those of humans with the use of a self-supervised learning process and extensive textual data training [5]. Because clinical reasoning frequently takes years to perfect through training and practical experience, the medical domain might present a substantial obstacle for LLMs. AI models can be very useful in the field of ophthalmology, addressing concerns about care that are unique to each patient, offering prompt explanations of pertinent standards, and encouraging dialogues about eye ailments, procedures, and therapies. Though AI models such as ChatGPT have proven successful in the general domains of law, business, and medicine, they have been demonstrated to have inconsistent accuracy when responding to questions on specific medical specialties [6].
Online health information searches are becoming more common. A survey conducted in the United States found that about two out of every three people look up health information online, and one out of every three adults uses search engines to self-diagnose [7].

A vast range of web content is used in a self-supervised manner to train LLMs such as ChatGPT [8]. Even if there is a large amount of training data available on the internet, the quality of the material varies. This is especially troubling because LLMs are unable to assess the validity or consistency of the training data [9]. Furthermore, LLMs may not have domain-specific knowledge, which leaves them open to producing plausible but possibly false answers. Even with the quick development of LLMs, a more in-depth analysis of their performance in particular medical fields is still necessary.

Specialization in neuro-ophthalmology addresses neurological issues pertaining to the eye. This specialized field primarily studies problems that affect the eye's movements, visual pathways, and visual processing. In order to diagnose disorders that can potentially be life-threatening or vision-threatening, a comprehensive history, examination, neuroimaging, and laboratory tests are all necessary in the intellectually taxing discipline of neuro-ophthalmology.

There have been some recent investigations into how well LLMs perform in the field of ophthalmology. An encouraging result of about 40%-50% was reported by Antaki et al. [10] and Mihalache et al. [11] in their evaluations of ChatGPT 3.5's performance on Ophthalmic Knowledge Assessment Program (OKAP) assessment questions. According to both authors, ChatGPT 3.5 performed worse on ophthalmology subspecialty questions than on general questions.

Most research has concentrated on the application of ChatGPT in the ophthalmology domain. There is still a need for a thorough assessment of Google Gemini's and Microsoft Bing's diagnostic capacities for neuro-ophthalmological cases.

There has not been enough research done on the effectiveness and dependability of AI in answering questions in the subspecialty of neuro-ophthalmology. By comparing the various AI models' output on in-depth case descriptions of different neuro-ophthalmic diseases, we hope to learn more about their capabilities and limits. In this study, we sought to assess and compare the performance of three freely available LLMs in answering questions for diagnosing neuro-ophthalmological cases: OpenAI's ChatGPT 3.5, Microsoft Bing, and Google Gemini. Our cases are derived from the neuro-ophthalmology subspecialty, taken from "Neuro-Ophthalmology 2023: When Should I Worry? Concerning Signs, Symptoms, and Findings in Neuro-Ophthalmology," which is available online [12].
Our research may shed important light on the advantages and disadvantages of employing LLM chatbots to obtain answers in the subspecialty of neuro-ophthalmology.
Materials And Methods
For the purpose of the study, we utilized cases from the publicly accessible database of "Neuro-Ophthalmology 2023: When Should I Worry? Concerning Signs, Symptoms, and Findings in Neuro-Ophthalmology." We selected 10 case presentations with various neuro-ophthalmic diseases, including "ethambutol optic neuropathy," "optic neuritis in the setting of myelin oligodendrocyte glycoprotein (MOG) antibody-associated disease (MOGAD)," "sixth nerve palsy secondary to immunoglobulin G4 (IgG4)-related disease," "superior optic disc hypoplasia," "arteritic anterior ischemic optic neuropathy due to giant cell arteritis," "pseudotumor cerebri," "trochlear nerve palsy," "vitreomacular traction," "amiodarone-associated toxic optic neuropathy," and "idiopathic orbital inflammatory syndrome." Ten cases with confirmed diagnoses were randomly chosen. Each case was described with details about the patient's demographics, history, chief complaint, any pertinent ocular or medical histories, ophthalmic examination findings, and ocular imaging findings (when necessary). Table 1 displays the 10 case descriptions that were input into each of the three artificial intelligence systems.
Case Description Diagnosis
1.A 32-year-old male (he/him) presented for progressive blurring of his vision. of presentation.Vision in the right eye remained at baseline.He reported feeling well.However, on direct questioning, he reported mild fatigue, scalp tenderness, and right arm weakness for the last month.Two weeks prior, he noted that he had to take breaks while eating due to (d/t) jaw pain.He denied snoring but had never been evaluated for sleep apnea.His past medical history was notable for well-controlled hypertension, hyperlipidemia, and hypothyroidism, for which he was on stable doses of medications.He was a nonsmoker and drank wine occasionally.The ocular history was significant for early-stage primary open-angle glaucoma (on latanoprost {Xalatan}), and he had a normal eye examination a month prior.On examination at the urgent visit, the BCVA was 20/20 OD and light perception OS, with loss of color vision OS and the presence of a left afferent pupillary defect.Visual fields to confrontation were full OD and completely depressed OS.The IOP was 11 OU.He was pseudophakic in both eyes, and the posterior segment evaluation showed a normal right optic nerve, with cup-to-disc ratio of 0.45.The left optic nerve was pale and swollen, with a peripapillary cotton wool spot.The retinal vessels and peripheries were normal OU.Humphrey 24-2 SITA Standard visual field was normal in the right eye.CT of the temporal bones also demonstrated an occlusive thrombus of the right transverse and sigmoid sinuses and jugular bulb.The patient was admitted to the hospital for the treatment of presumed infectious mastoiditis and received intravenous (IV) cefepime, vancomycin, and ampicillin/sulbactam.The following day, he was taken to the operating room by otolaryngology for a right mastoidectomy.Intraoperatively, no purulent material was found.The entire mastoid was filled with a soft tissue mass with a flesh coloration and consistency, which was removed and sent for culture and pathology.Antibiotics were 
discontinued two days later with no growth on culture (including testing for acid-fast bacilli).Pathology results from the biopsied mastoid tissue showed storiform fibrosis and a lymphoplasmacytic infiltration with an immunoglobulin G4 (IgG4) to IgG ratio of 48%.There was no evidence of lymphoma or plasma cell neoplasm by flow cytometry or immunohistochemistry. Serum levels of IgG4 subclass were elevated to 99.0 mg/dL (reference range: 4.0-86.0mg/dL).He was discharged on a tapering dose of oral prednisone, and therapy with rituximab was initiated two weeks later.Six weeks after initial presentation, his ocular motility and alignment had returned to normal, and his double vision resolved completely.
5.
A 20-year-old female college student was referred to the neuro-ophthalmology clinic for the incidental discovery of bilateral optic nerve pallor on a routine eye examination.Three years prior to her presentation, she saw an optometrist for contact lens evaluation and was found to have mild pallor of both optic discs.Her vision was normal.She was referred to an outside hospital, where she underwent CT and MRI of the brain that were reportedly normal.One month prior to presentation, she saw an outside ophthalmologist for a routine eye examination, was noted to have visual field defects in both eyes, and was referred to the neuroophthalmology clinic.Her past ocular and medical history were unremarkable.She occasionally drinks alcohol, does not smoke, and does not take any medications.On examination, BCVA was 20/20 in each eye, and she had normal color vision and counted fingers in all quadrants with each eye.The pupils were equal, round, and reactive to light without relative afferent pupillary defect.IOP was normal in both eyes.External examination and anterior segment examination were normal.Dilated funduscopic examination showed mild temporal pallor of both optic discs, more prominent on the left.Humphrey visual field testing showed nonspecific peripheral defects infratemporally in the right eye and inferior arcuate visual field defect in the left eye.visual acuity was 20/20 OU, with normal color and normally reactive pupils without an afferent pupillary defect.Visual fields by confrontation were full.He appeared to have a right head tilt.Ocular motility testing revealed full ocular ductions and versions.On prism alternate cover test, he had a 5Δ left hypertropia (LHT) in the primary gaze, which increased in the right gaze (8Δ LHT) and down gaze (6Δ LHT) and on left head tilt (6Δ LHT).There was no obvious excyclotorsion.He was able to fuse with 4Δ of base-down (BD) prism over the left eye.All three large language AI models named OpenAI ChatGPT 3.5, Microsoft Bing, 
and Google Gemini received all 10 case descriptions directly as input, along with the following question: "What is the most probable diagnosis?" The public can easily access all three LLMs for free. Each question was asked separately in the ChatGPT 3.5, Microsoft Bing, and Google Gemini artificial intelligence programs, and the responses were compared with the neuro-ophthalmologist's actual diagnoses in the American Academy of Ophthalmology subspecialty program book.
To prevent any influence from earlier prompts and to eliminate memory bias, we started a new chat for every prompt. The produced content was collected and organized for evaluation. Next, we assessed how accurate ChatGPT 3.5, Bing, and Gemini were at diagnosing each case. Because Google Gemini and Microsoft Bing frequently provided several differential diagnoses, the first item was considered the most probable diagnosis.
All chatbot responses were graded in two separate categories: accuracy (correct or incorrect) and suitability (appropriate or inappropriate). All responses were graded as "appropriate" or "inappropriate" while blinded to the true diagnosis. A response was characterized as "appropriate" if it gave a suitable description of the diagnostic differentiation procedure based on the input data in each case scenario. Each response was further classified as "correct" or "incorrect"; a response was considered "correct" if the chatbot's diagnosis matched the confirmed diagnosis.
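This two-dimensional grading scheme can be tallied with a short script. The case-level grades below are hypothetical placeholders (the per-case assignments are illustrative only; the totals are chosen to mirror the counts reported in the Results):

```python
# Hypothetical (accuracy, suitability) grades for 10 cases per model.
grades = {
    "ChatGPT 3.5":    [("correct", "appropriate")] * 4
                      + [("incorrect", "appropriate")] * 2
                      + [("incorrect", "inappropriate")] * 4,
    "Microsoft Bing": [("correct", "appropriate")] * 4
                      + [("incorrect", "appropriate")]
                      + [("incorrect", "inappropriate")] * 5,
    "Google Gemini":  [("correct", "appropriate")] * 4
                      + [("incorrect", "appropriate")]
                      + [("incorrect", "inappropriate")] * 5,
}

def summarize(rows):
    """Percentage of correct and of appropriate responses for one model."""
    n = len(rows)
    correct = sum(1 for acc, _ in rows if acc == "correct")
    appropriate = sum(1 for _, suit in rows if suit == "appropriate")
    return {"accuracy_pct": 100 * correct / n,
            "suitability_pct": 100 * appropriate / n}

summary = {model: summarize(rows) for model, rows in grades.items()}
```

Grading the two dimensions independently is what allows a response to be "appropriate" (a sound differential-diagnosis process) while still being "incorrect."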
Since the study involved no human subjects and used only publicly available data, it did not require ethical approval.
Results
Table 2 demonstrates the provisional diagnosis formulated by ChatGPT 3.5, Microsoft Bing, and Google Gemini for each case. In terms of accuracy, all three chatbots (ChatGPT 3.5, Microsoft Bing, and Google Gemini) gave the correct diagnosis in four (40%) of the 10 case scenarios, whereas in terms of suitability, ChatGPT 3.5, Microsoft Bing, and Google Gemini gave six (60%), five (50%), and five (50%) appropriate responses, respectively. Table 3 presents the details of the diagnoses in terms of both suitability and accuracy, as provided by ChatGPT 3.5, Microsoft Bing, and Google Gemini. Our findings show that the accuracy of all three chatbots is comparable. While ChatGPT outperforms Microsoft Bing and Google Gemini in terms of appropriateness, our results do not align with the findings of earlier research by Raimondi et al. [13] and Ali et al. [14], which demonstrated ChatGPT's superiority over alternative LLMs in examinations related to neurosurgery and ophthalmology, respectively.
Numerous distinctive features, such as ChatGPT's very large parameter set, the continuous feedback it receives from users and professionals to improve its training, its advanced reasoning and instruction-following abilities, and its more recent training data, are probably responsible for its superior performance in terms of appropriateness [15]. It is interesting to note, nevertheless, that all three LLM chatbots were equally capable of providing perceptive responses in terms of accuracy.
Greater relevance, coherence, and quality in a model's outputs are made possible by its context breadth, that is, the number of words it uses to build its response.
In our study, the average character count was lower in responses provided by ChatGPT in comparison to Microsoft Bing and Google Gemini.
Students seeking brief explanations may find that ChatGPT's shorter answers provide more concise feedback. Concise answers may also indicate greater computing efficiency in data processing, which saves time and resources [16].
There are a couple of inherent limitations with LLMs. First, privacy issues may be raised when patient data is used for processing. Second, LLMs might yield inaccurate results because they were designed for public use rather than for providing clinical diagnoses.
Furthermore, even if users repeatedly enter the same data into LLMs, they may still produce different results and primary diagnoses. This indicates that LLMs remain unreliable and are unable to offer users consistent recommendations and diagnoses [17].
Despite LLMs' incredible abilities in a multitude of domains, we must acknowledge their limitations.
Chatbots powered by artificial intelligence that are specifically programmed and trained to identify eye conditions are warranted.Additionally, it makes sense to deploy chatbots that can proactively request data that end users have not provided.
There are certain limitations to our study. We assessed three LLMs on the basis of 10 case scenarios; other researchers might validate our results using a larger number of cases. Furthermore, a number of these cases had typical disease presentations that could be definitively diagnosed. Thus, this selection of cases may not accurately reflect neuro-ophthalmology practice, in which many cases have complicating circumstances and equivocal findings. On the other hand, a few cases showed unusual presentations of the diagnosis. This is where the art of medicine enters the picture and calls for a human practitioner to carefully consider these aspects.
Conclusions
The results demonstrate the great potential of artificial intelligence-driven chatbots, which can be used as a consultation tool to help family doctors and patients obtain referral recommendations. These models can also be helpful for preliminary evaluations in tertiary ophthalmology care. Additionally, students studying neuro-ophthalmology may find AI models to be an additional instructional resource that enhances standard learning environments by offering real-time, data-driven feedback.
The growing importance of AI in medicine makes it essential to keep in mind the moral and professional responsibilities that doctors have. Technology needs to be utilized in conjunction with experts, not as a substitute for them.
35-year-old Black female presented with right eye pain and vision loss.Ten days prior to presentation, she noted throbbing right eye pain that progressed in intensity over several days and became worse with eye movements.Two days prior to evaluation, she awoke with dim vision centrally in the right eye, "like a smudge," and presented to an optometrist, where visual acuity was 20/40 OD with noted red desaturation and a right relative afferent pupillary defect (RAPD) but intact visual field to confrontation.Fundus examination revealed right optic disc swelling.Her past medical history was notable for well-controlled hypertension, uterine fibroids, and benign glomus tympanicum tumor that was fully resected.She worked from home in human resources and did not use tobacco or drugs or drink alcohol.Family history was noncontributory.A month prior to the onset of symptoms, she had received the Pfizer COVID-19 booster shot and an influenza vaccine.She denied any sick contacts, animal exposures, or recent travel.She denied a history of headache, prior episodes of vision loss, pain on eye movements, diplopia, vertigo, dysarthria, dysphagia, weakness, numbness, paresthesias, difficulty with balance or walking, or bowel/bladder disturbances.On examination one day after the optometry visit, visual acuity was 20/200 in the right eye, with 1.5/11 Ishihara color plates and right RAPD.Humphrey 24-2 SITA Fast noted diffuse suppression OD and normal OS.Slit lamp examination was notable for normal anterior chamber examination, without cell or flare.Dilated fundus examination revealed a 270-degree right optic disc swelling without disc hemorrhage or pallor.The afferent and efferent examination of the left eye was normal.61-year-old male presented for the evaluation of new-onset horizontal double vision beginning three days prior to presentation.He reported that the double vision worsened when looking to his right.He denied any preceding injury, illness, or associated symptoms, such as 
numbness or weakness.However, he did note intermittent right-sided tinnitus for one month and mild swelling and tenderness behind his right ear beginning two days prior to presentation.His past medical history was significant for systemic hypertension, hyperlipidemia, and type 2 diabetes.His daily medications included atorvastatin 40 mg, hydrochlorothiazide 12.5 mg, and lisinopril 40 mg.His diabetes was diet-controlled.His past ocular history included a branch retinal vein occlusion in the right eye four years prior and a right sixth nerve palsy two years prior with spontaneous resolution of diplopia after 10 weeks.He did not smoke or consume alcohol.He worked as an automobile mechanic.On examination, BCVA was 20/20 in both eyes.The pupils were equal, round, and reactive to light without a relative afferent pupillary defect.IOP was 12 on the right and 14 on the left.Slit lamp examination was normal other than early nuclear sclerosis.Dilated posterior segment evaluation demonstrated pigment mottling of the superior macula on the right but was otherwise unremarkable.Ocular motility examination revealed a -2 abduction deficit on the right with an esotropia of 25 prism diopters in the right gaze, 10 prism diopters in the primary gaze, and 2 prism diopters in the left gaze.An MRI of the brain with and without contrast showed complete opacification and the enhancement of the right mastoid air cells and a portion of the middle ear cavity.The enhancing tissue in the mastoid air cells was abutting the right sigmoid sinus, likely representing bony erosion or dehiscence.Additionally, there was diffuse enhancement of the right tentorium; pachymeningeal enhancement overlying the right temporal, parietal, and occipital lobes; and leptomeningeal enhancement of the right cerebellum.A al. 
Cureus 16(4): e58232. DOI 10.7759/cureus.58232

6. A 26-year-old female presented for the evaluation of transient visual obscurations. She reported constant poor quality of vision in both eyes for one month, with associated dimming and tunneling of her vision that occurred when turning her head in either direction or when quickly changing position. She also had constant, dull headaches, which were worse on lying down, and intermittent pulsatile tinnitus. She did not have diplopia. She had been in her normal state of health until about six months prior, when she developed pulsatile tinnitus and positional dizziness. Neuroimaging at that time demonstrated a large posterior fossa cyst that was associated with significant mass effect on her cerebellum and resultant Chiari malformation. Four months prior, she underwent fenestration of the cyst with neurosurgery. She did well in the immediate postoperative period and was discharged home on postoperative day 2. However, eight days later, she re-presented with wound drainage and underwent wound revision surgery. No cerebrospinal fluid (CSF) leak was identified, but cultures grew Staphylococcus pseudintermedius. Therefore, she was evaluated by the infectious disease team and was started on antibiotic treatment with vancomycin. A PICC line was placed, and she continued IV vancomycin treatment for six weeks. During that time, additional sensitivity analysis was performed, and the sensitivity of the organisms to minocycline was demonstrated. Therefore, her PICC line was removed, and oral treatment with minocycline was initiated, with a plan for at least six months of treatment. MRI completed two months postoperatively showed postoperative changes from the arachnoid cyst fenestration, with a small residual cystic fluid collection. On examination, visual acuity was 20/20 in the right eye and 20/20 in the left eye. There was no relative afferent pupillary defect. Extraocular motility was full. IOP was 23 mmHg in the right eye and 17 mmHg in the
left eye.She correctly identified 13/13 Ishihara color plates with the right and left eyes.Anterior segment examination was unremarkable, and fundus examination demonstrated bilateral Frisen grade 4 optic disc swelling.Humphrey visual field 24-2 showed a few nasal missed spots in both eyes, with an enlarged blind spot in the left eye.The OCT of the retinal nerve fiber layer demonstrated thickening in both eyes, with average values of 338 microns in the right eye and 399 in the left eye.Her weight was 127 pounds, with body mass index of 21.
. An 84-year-old male underwent uncomplicated cataract surgery in the right eye (RE) in late April.He then had cataract surgery on his left eye (LE) in early May.He noted difficulty reading at his one-week postoperative visit and was noted to have cystoid macular edema in the LE.He awoke in late June with slightly blurred vision in the LE and was noted to have optic disc edema in the LE at his , facial numbness or paresthesias, jaw claudication, and scalp tenderness, as well as recent constitutional or systemic symptoms.Her past medical history was significant only for well-controlled hypertension and mild hyperlipidemia, for which she was on medications.There was no significant tobacco or alcohol use history.Ophthalmic examination revealed vision of 4/200 OD and 20/20 OS.Ishihara color plate testing showed no control plate OD and 14/14 plates OS.There was a right afferent pupillary defect.IOP was normal.External examination was normal without ptosis or proptosis.Extraocular motility showed mild limitation of ductions in all directions in the right eye and normal motility in the left eye.Alternate cover testing showed a small intermittent esotropia in the primary gaze.The rest of her cranial nerve examination was unremarkable.Slit lamp examination was remarkable only for mild nuclear sclerosis OU.Dilated fundus examination showed mild, diffuse optic disc swelling OD and normal disc OS with normal maculae, vessels, and periphery.Humphrey SITA Standard visual field with size V stimulus showed a dense central scotoma in the right eye and was normal for the left eye using the size III stimulus.OCT showed mild, diffuse retinal nerve fiber layer thickening OD and normal OS.Macular OCT was normal.Ganglion cell complex showed mild focal superior thinning OD and normal OS.
one-month postoperative visit.His past medical history included gout, idiopathic cardiomyopathy, congestive heart failure, aortic regurgitation, aortic root dilatation, atrial fibrillation, hyperlipidemia, hypertension, prostate cancer, peripheral vascular disease, and erectile dysfunction.His medications were allopurinol, alendronate, amiodarone, atorvastatin, doxazosin, fluticasone, furosemide, metoprolol, and rivaroxaban.He denied symptoms of giant cell arteritis.He was a former smoker and drank about 1-2 alcoholic beverages several times a week.He was allergic to gabapentin, and his family history was noncontributory.Examination on July 1 showed that his visual acuities were 20/40 RE and 20/50 LE.He had a subtle left afferent pupillary defect.He identified 11/11 Ishihara plates RE and 10/11 LE.His examination demonstrated corneal verticillate, in both eyes, and his extraocular motility was normal.His cup-to-disc ratio was 0.05 RE and 0.0 LE.There was a normal optic disc RE and moderately severe disc edema LE, with no hemorrhages, lipid, or cotton wool spots.Visual field testing showed an arcuate visual field defect RE and a subtle central scotoma and inferior constriction LE.OCT showed a lamellar macular hole RE and significant macular edema LE, without vitreopapillary or vitreomacular traction in either eye.A 74-year-old male was followed for two years by an ophthalmologist as a glaucoma suspect with asymmetric optic disc cupping (right greater than left).At the initial visit, visual acuities were 20/20 OD and 20/25 OS.IOPs were 17 mmHg OU.Automated static perimetry revealed scattered defects OD and superior and inferior arcuate scotomas OS.Repeat automated perimetry at eight months, 14 months, and 16.5 months later demonstrated possible progression of the visual field defects OS but with the appearance of the optic nerves, visual acuity, and IOP remaining stable.The patient's only visual complaint was a long-standing problem with glare and difficulty with 
driving at night. He did not particularly notice any problems with the vision in the left eye and denied any systemic or neurological symptoms. His medical history was notable for hypertension, hyperlipidemia, gout, osteoarthritis, nephrolithiasis, and multiple cutaneous basal cell carcinomas. He was taking aspirin, atenolol, lisinopril, atorvastatin, naproxen, and allopurinol. His ocular history was notable for early cataracts OU and a tonic pupil OD dating back 12 years. He denied any family history of glaucoma or other eye disease. Vitreomacular traction. 9. A 62-year-old female presented for the evaluation of visual blurring from the right eye of two-week duration. She stated that her visual blurring got worse over the course of two weeks but since had plateaued. The left eye was unaffected. She denied diplopia, ptosis. Symptoms were noticed one morning upon awakening two weeks prior to presentation. He described the images as being one on top of the other and slightly angled, and he felt his symptoms were stable since onset. He denied headaches, neck stiffness, associated pain, or blurred vision. The patient was in excellent health, and he had no past medical history of significance. Family and social histories were also unremarkable. Laboratory work ordered by his PCP, whom he saw initially, yielded a positive Lyme titer by western blot. The patient denied any history of tick bites, joint pain, or fevers; however, he did note recent fatigue. He also mentioned that as a child, he was told that he had a "wandering eye." He denied any prior patching or treatment for this. On examination, his Trochlear nerve palsy
TABLE 3: Responses from the three chatbots
The lengths of the LLM chatbots' responses to the 10 selected neuro-ophthalmology cases are shown in Table 4.
TABLE 4: The LLM chatbots' response lengths for the 10 selected neuro-ophthalmology cases
Advances in artificial intelligence have had a big impact on many different areas, including healthcare. Conversational AI models may help medical practitioners offer accurate and timely information. Because ChatGPT improves medical practitioners' whole clinical practice and keeps them informed of new developments, it is a valuable resource for medical education. Research is still being conducted to determine how accurately ChatGPT applies the knowledge of medical specialties and subspecialties, despite the fact that it has advanced significantly since its launch. Clinical knowledge is frequently predicated on current general expert consensus or guidelines, and treatment and monitoring are frequent components that change more quickly than disease description and pathogenesis. These features are challenging to include in a broad artificial intelligence language model, since generative AI systems generally rely on pre-existing data for learning and subsequently use it to create new output. Antaki et al. compared two versions of ChatGPT on questions from OphthoQuestions and the Basic and Clinical Science Course Self-Assessment Program. The percentages of correctly answered questions for the prepared questions on a US board test were 49.2% and 59.4% [10].
Design and Application of Optimization Software for Substation Operation Mode Based on EMS
This paper proposes optimization software for substation operation mode that can read data online from EMS and calculate the total loss of substations in the parallel, split, or individual operation mode. It selects the optimal mode and feeds the conclusion back to EMS so that substations operate in the optimal way. The software is suitable for optimizing substations in rural power grids.
Introduction
The rural power grid is one of the important parts of the electric power system. It is generally small in scale, and sometimes not even networked [1]. Therefore, the emphasis of rural power grid optimization should be placed on substation operation mode. The substation operation mode has a large impact on the loss of the rural power grid because of the large variation of load [2]. Accordingly, when substations operate in the optimal mode, they can obtain obvious economic benefit.
The present Energy Management System (EMS) does not have a function to optimize the operation mode of substations. We therefore designed optimization software for substation operation mode on the basis of EMS. It can not only select the optimal substation operation mode according to online data but also realize closed-loop control by feeding the conclusion back to EMS. EMS then adjusts the operation mode of the substation accordingly so that it operates in the optimal way. Moreover, the software provides convenient functions for statistics, accumulation, query, and printing. During trial operation, the software achieved obvious economic benefit.
Design of Software
The framework of the optimization software for substation operation mode is shown in Figure 1. The flowchart of the optimization software for substation operation mode is shown in Figure 2.
The software reads data from two parts of EMS through a private data interface. One part comes from the Power Application Software (PAS), including the topological relations of all substation equipment and their essential parameters. The other comes from the Supervisory Control and Data Acquisition (SCADA) system, including the telemetry data and remote signaling data of the substation. All data are input into the optimization module of the software.
First, the optimization module judges the connection of all substation equipment from their topological relations. Then it determines the current substation operation mode (parallel operation, split operation, or individual operation) from the remote signaling data of the substation. The lines running on the secondary side are determined in the same way.
After that, the optimization module calculates the total load of every secondary bus from the telemetry data of the substation.
Based on these analyses, the optimization module calculates the total loss of the substation in each operation mode and selects the optimal one according to the result. In addition, it checks whether any transformer would overload in a given mode; if a transformer overloads in a certain mode, that mode does not take part in the subsequent selection. The calculation method of total loss for each operation mode is introduced in detail later.
Furthermore, the optimization module has statistics and accumulation functions that calculate the total loss of the substation under the current mode and under the optimal mode for every day, every month, every year, and so on.
The optimization module stores all results in the data storage module of the software. The data display module provides query and printing functions by reading data from the data storage module.
At the same time, the optimization module passes the optimal operation mode to the closed-loop control module, which feeds the conclusion back to the SCADA system through the private data interface. The SCADA system then adjusts the operation mode of the substation accordingly so that it operates in the optimal way.
Algorithm
The emphasis of a substation's total loss should be placed on its transformers' losses. Taking as an example a substation with two transformers and two buses on the secondary side, the software uses the following algorithm.
A transformer's total loss consists mainly of iron loss and copper loss [3][4]. Iron loss can be treated as no-load loss, because the copper loss produced by the no-load current is negligible. In addition, iron loss is determined by the main flux in the core, and the main flux remains essentially unchanged between no-load and on-load conditions if the applied voltage remains unchanged. Therefore, iron loss can be considered equal to the no-load loss whether the transformer is loaded or not. The calculation method of total loss is shown in equation (1) [5][6].
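With $\Delta P$ the total loss, equation (1) is the standard sum of the constant iron (no-load) loss and the copper loss, which grows with the square of the load ratio:

$$\Delta P = P_0 + \left(\frac{S}{S_N}\right)^2 P_k \qquad (1)$$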
where P0 is the transformer no-load loss, S is the transformer apparent power, SN is the transformer capacity, and Pk is the transformer short-circuit loss.
Based on these analyses, the calculation method of total loss in the transformer 1 individual operation mode is shown in equation (2).
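Substituting the combined load of both secondary buses into equation (1), and using $S^2 = P^2 + Q^2$ with the symbols defined in what follows, equation (2) takes the form:

$$\Delta P_{1} = P_{01} + \frac{(P_1 + P_2)^2 + (Q_1 + Q_2)^2}{S_{N1}^2}\, P_{k1} \qquad (2)$$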
where P01 is the no-load loss of transformer 1, P1 and P2 are the active power of buses 1 and 2 on the secondary side, Q1 and Q2 are the reactive power of buses 1 and 2 on the secondary side, SN1 is the capacity of transformer 1, and Pk1 is the short-circuit loss of transformer 1.
Similarly, the calculation method of total loss in the transformer 2 individual operation mode is shown in equation (3).
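By the same substitution applied to transformer 2, equation (3) takes the form:

$$\Delta P_{2} = P_{02} + \frac{(P_1 + P_2)^2 + (Q_1 + Q_2)^2}{S_{N2}^2}\, P_{k2} \qquad (3)$$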
where P02 is the no-load loss of transformer 2, SN2 is the capacity of transformer 2, and Pk2 is the short-circuit loss of transformer 2.
Copyright © 2013 SciRes. EPE

In the same way, the calculation method of total loss in the split operation mode is shown in equation (4).
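With each transformer carrying only its own bus, equation (4) is the sum of two single-transformer losses of the form of equation (1):

$$\Delta P_{\mathrm{S}} = P_{01} + \frac{P_1^2 + Q_1^2}{S_{N1}^2}\, P_{k1} + P_{02} + \frac{P_2^2 + Q_2^2}{S_{N2}^2}\, P_{k2} \qquad (4)$$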
The total loss in parallel operation is obtained as follows. According to the load distribution formula, the apparent powers of the two transformers are related as shown in equation (5):

S_1 / S_2 = (S_N1 / u_k1) / (S_N2 / u_k2)    (5)

where S_1 and S_2 are the apparent powers of transformers 1 and 2, and u_k1 and u_k2 are their impedance voltages. Based on equation (5), the apparent power carried by each transformer follows as shown in equations (6) and (7), with S the total apparent load:

S_1 = S · (S_N1 / u_k1) / (S_N1 / u_k1 + S_N2 / u_k2)    (6)

S_2 = S · (S_N2 / u_k2) / (S_N1 / u_k1 + S_N2 / u_k2)    (7)

Combining equations (1), (6), and (7), the total loss in parallel operation is given by equation (8):

ΔP_parallel = P_01 + P_02 + (S_1 / S_N1)^2 · P_k1 + (S_2 / S_N2)^2 · P_k2    (8)
In conclusion, the software can calculate the total loss in the parallel, split, and individual operation modes by means of equations (2), (3), (4), and (8), and select the mode with the lowest loss.
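The mode comparison implemented by the software can be sketched as follows; this is an illustrative re-implementation of equations (1)-(8), with all function and variable names invented here rather than taken from the software itself:

```python
import math

def total_loss_individual(P0, Pk, SN, P_load, Q_load):
    """Loss with one transformer carrying the whole load (Eqs. 2/3)."""
    S2 = P_load**2 + Q_load**2          # squared apparent load
    return P0 + S2 * Pk / SN**2

def total_loss_split(t1, t2, bus1, bus2):
    """Each transformer feeds its own secondary bus (Eq. 4)."""
    P01, Pk1, SN1 = t1
    P02, Pk2, SN2 = t2
    (P1, Q1), (P2, Q2) = bus1, bus2
    return (P01 + (P1**2 + Q1**2) * Pk1 / SN1**2
            + P02 + (P2**2 + Q2**2) * Pk2 / SN2**2)

def total_loss_parallel(t1, t2, P_load, Q_load, uk1, uk2):
    """Both transformers share the load per the distribution rule (Eqs. 5-8)."""
    P01, Pk1, SN1 = t1
    P02, Pk2, SN2 = t2
    S = math.hypot(P_load, Q_load)
    w1, w2 = SN1 / uk1, SN2 / uk2       # load shares proportional to SN/uk
    S1 = S * w1 / (w1 + w2)
    S2 = S * w2 / (w1 + w2)
    return P01 + P02 + (S1/SN1)**2 * Pk1 + (S2/SN2)**2 * Pk2

def best_mode(t1, t2, uk1, uk2, bus1, bus2):
    """Evaluate every operation mode and return the cheapest one."""
    P, Q = bus1[0] + bus2[0], bus1[1] + bus2[1]
    losses = {
        "T1 individual": total_loss_individual(t1[0], t1[1], t1[2], P, Q),
        "T2 individual": total_loss_individual(t2[0], t2[1], t2[2], P, Q),
        "split":         total_loss_split(t1, t2, bus1, bus2),
        "parallel":      total_loss_parallel(t1, t2, P, Q, uk1, uk2),
    }
    return min(losses.items(), key=lambda kv: kv[1]), losses
```

Because parallel operation doubles the no-load loss while halving each transformer's load factor, individual operation typically wins at light load and parallel operation at heavy load, which is exactly the trade-off the optimization module exploits.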
Example
The software has been in successful operation at the Second Power Supply Bureau of Harbin since last March. During this period, it has run stably and proved friendly to interact with and easy to use. Furthermore, the average total loss of the four substations has dropped to 86.79 percent, which demonstrates the software's clear economic benefit.

The difference in each substation's monthly total loss between the original operation mode and the optimized mode is shown in Table 1.

The monthly percentage drop in the substations' total loss is shown in Table 2.
Conclusions
The optimization software for substation operation mode is suitable for optimizing substations in rural power grids. It can read data on-line from the EMS; calculate the total loss of a substation under parallel, split, or individual operation; select the optimal mode; and feed the result back to the EMS so that the substation operates in that mode. The software showed a clear economic benefit during its trial operation.
Figure 1. Framework of optimization software for substation operation mode.
Fast frictionless dynamics as a toolbox for low-dimensional Bose-Einstein condensates
A method is proposed to implement a fast frictionless dynamics in a low-dimensional Bose-Einstein condensate by engineering the time-dependence of the transverse confining potential in a highly anisotropic trap. The method exploits the inversion of the dynamical self-similar scaling law in the radial degrees of freedom. We discuss the application of the method to preserve short-range correlations in time of flight experiments, the implementation of nearly-sudden quenches of non-linear interactions, and its power to assist self-similar dynamics in quasi-one dimensional condensates.
Counterintuitive as they are, implementations of fast frictionless dynamics (FFD) providing a shortcut to adiabaticity in quantum systems have recently been introduced theoretically [1][2][3] and demonstrated in the laboratory [4,5].
In this Letter, FFD is exploited as a tool-box to manipulate and control low dimensional quantum gases. As a primary goal, we propose a method to tune the nonlinearity of the effective low dimensional (1D and 2D) dynamics of an anisotropic Bose-Einstein condensate (BEC) by engineering the time-modulation of the transverse confinement. The possibility of attaining this goal by means of a multi-scale expansion method was recognised by Staliunas et al. for slow periodic modulations [6] and led to the observation of Faraday patterns in cigar-shaped BECs [7].
Our method, not restricted by adiabaticity in the transverse dynamics, provides an alternative way to implement a variety of schemes for the generation, stabilisation and control of solitons, and the study of related non-linear matter-wave phenomena in BECs, where it is often necessary to implement a time-dependent coupling constant [8]. Another situation where it can be applied is the controlled expansion of BEC clouds [9], as in the simulation of cosmological analogues [10]. Moreover, we shall discuss how FFD can be exploited to implement nearly-sudden quenches of the mean-field non-linear interactions, preserve quantum correlations in time-of-flight, and induce a self-similar expansion of an interacting quasi-1D atomic cloud.
FFD allows the system to evolve from an initial state to a final one without becoming excited, in a given time τ much smaller than that required for an adiabatic dynamics. FFD relies on the inversion of dynamical scaling laws, which can often be exploited to describe harmonically trapped ultracold gases, such as the Calogero-Sutherland model [11], Tonks-Girardeau gases [12,13], strongly interacting mixtures [14], Lieb-Liniger gases [15], Bose-Einstein condensates (BEC) [2,16,17], including dipolar interactions [18], and more general many-body quantum systems [19]. In a harmonic trap with time-dependent frequency ω(t), the self-similar evolution of the single-particle states φ_n (n = 0, 1, 2, ...) follows from the well-known scaling law, Eq. (1), where the scaling factor b = b(t) is the solution of the Ermakov differential equation, b̈ + ω²(t)b = ω²(0)/b³ (Eq. (2)), satisfying the boundary conditions b(0) = 1 and ḃ(0) = 0, with E_n = ℏω(0)(n + 1/2) and a time-dependent phase δ(t) [20]. In particular, the probability density reads |φ_n(x, t)|² = |φ_n(x/b(t), t = 0)|²/b(t). The essence of FFD is to exploit the existence of the self-similar dynamics to force a desired trajectory b(t) and invert Eq. (2) to determine the required modulation of the control parameter ω(t) [1,2].
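As a numerical sketch of this inversion (using an illustrative quintic trajectory for b(t), not one taken from the paper), one can impose b(0) = 1 and b(τ) = b_f with vanishing first and second derivatives at both ends, and read off ω²(t) = ω²(0)/b⁴ − b̈/b:

```python
def b_traj(s, bf):
    """Illustrative quintic scaling trajectory: b(0)=1, b(1)=bf,
    with vanishing first and second derivatives at both endpoints."""
    return 1.0 + (bf - 1.0) * (10*s**3 - 15*s**4 + 6*s**5)

def d2b_traj(s, bf, tau):
    """Second time derivative of b along the trajectory (s = t/tau)."""
    return (bf - 1.0) * (60*s - 180*s**2 + 120*s**3) / tau**2

def omega_squared(t, tau, bf, omega0):
    """Inverted Ermakov equation: w(t)^2 = w(0)^2 / b^4 - b'' / b."""
    s = t / tau
    b = b_traj(s, bf)
    return omega0**2 / b**4 - d2b_traj(s, bf, tau) / b
```

Since b̈ vanishes at both ends, ω²(0) = ω₀² and ω²(τ) = ω₀²/b_f⁴ exactly, while for fast expansions the intermediate values of ω²(t) turn negative, i.e. the trap must transiently become expulsive.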
Effective one-dimensional time-dependent Gross-Pitaevskii equation. - Our aim is to find an effective 1D non-linear Schrödinger, or Gross-Pitaevskii (GPE), equation for a BEC in an elongated trap in which the transverse confinement is modulated in time. We will do so by exploiting the scaling law in Eq. (1). We start with the 3D time-dependent GPE, which governs the dynamics of the order parameter Ψ = Ψ(x, t) with the normalisation condition ∫dx |Ψ(x, t)|² = 1, and where g_3D = 4πℏ²Na/m, N is the number of atoms in the condensate of mass m, and a is the s-wave scattering length, which determines the healing length ξ = 1/√(8πn|a|). It is convenient to introduce the characteristic length in each direction, a_j = √(ℏ/(mω_j(0))) for j = x, y, z, in terms of which the density scales as n ~ N/(a_x a_y a_z). For tight transverse confinement (ω_x ~ ω_y ≫ ω_z), the kinetic energy in the transverse direction dominates over two-body collisions. Whenever the dimensionality parameter ε² = a_x a_y/ξ² ~ N|a|/a_z ≪ 1, the transverse excitations are frozen and a dimensional reduction of the 3D GPE is possible [21][22][23]. The 3D GPE then becomes a linear Schrödinger equation for the radial degrees of freedom. We next use the ansatz Ψ(x, t) = Φ_0(x, y, t)ψ(z, t) and take the ground-state condensate wavefunction in the transverse direction at t = 0 to be that of an unperturbed 2D harmonic oscillator, Φ_0(x, y, t = 0). It follows from Eq. (1) that |Φ_0(x, y, t)|² = |Φ_0(x/b_x(t), y/b_y(t), t = 0)|²/(b_x(t)b_y(t)). After dimensional reduction, an effective 1D GPE with a time-dependent non-linearity is derived, in which a residual time-dependent phase can be removed with a unitary transformation.
In the following, we focus our attention on the effective coupling constant, defined in Eq. (5), satisfying the conditions g_1D(0) = 2ℏ²Na/(m a_x a_y), ġ_1D(0) = 0, and g̈_1D(0) = 0. The third equality is optional; it follows from imposing the first two together with the Ermakov equation and the continuity of ω_{r=x,y}(t) at t = 0, and it warrants a smooth modulation of the transverse trapping frequency, avoiding abrupt changes at t = 0. Nonetheless, abrupt changes of the control parameter are often exploited in bang-bang control methods [24]. As pointed out by Olshanii, the transverse confinement can severely modify the scattering properties and, in particular, lead to a confinement-induced resonance of the form g_1D → g_1D(1 − Ca/a_⊥)⁻¹, with C = 1.4603... [25,26]. Eq. (5) holds far away from confinement-induced resonances, i.e. for |a| ≪ {a_x, a_y}. In addition, low-dimensional BECs generally exhibit phase fluctuations, which can lead to deviations from the mean field [27,28]. Even at zero temperature, quantum fluctuations suppress off-diagonal long-range order, in the sense that the reduced one-body density matrix decays over the system size R, with n(x) the local density. The quantum suppression parameter Γ_1 for D = 1 is given in [29]. The applications discussed in the rest of the manuscript involve a monotonic expansion of the cloud in the transverse direction (ḃ(t > 0) > 0), which diminishes the role of phase fluctuations. It will suffice for us to focus on situations where phase fluctuations are negligible in the initial state, so as to prevent the formation of density ripples for t > 0 [30], i.e. Γ_1 ≪ 1. However, should one be interested in increasing the effective interactions to a maximum value g_1D(τ), the condition for phase fluctuations to be negligible becomes Γ_1 g_1D(τ)/g_1D(0) ≪ 1. We note as well that the scaling function satisfies b(t) > 0, so that it is not possible to change the sign of the interactions as with Feshbach or confinement-induced resonances [26]. This technique is therefore restricted to tuning the amplitude of the coupling constant.
Let us now consider an isotropic transverse confinement such that ω_⊥(t) = ω_x(t) = ω_y(t) (b = b_x = b_y), and assume we are interested in a given time dependence g_1D(t) = g_1D(0)/b(t)². This can be engineered by a modulation of the transverse trapping frequency given by Eq. (7), where g_1D = g_1D(t). Note that this expression is not positive-definite, and ω_⊥²(t) might involve imaginary frequencies, as discussed below. It is straightforward to verify that for ballistic expansion along the transverse degrees of freedom, which leads to a polynomially decaying non-linearity, the required transverse frequency is indeed ω_⊥²(t > 0) = 0. Moreover, the existence of the invariant of motion [1] associated with the Ermakov equation underlying the derivation of Eq. (7) allows one to drive the transverse frequency without fulfilling the adiabaticity condition ω̇_⊥(t)/ω_⊥²(t) ≪ 1, as long as the decoupling of the transverse degrees of freedom holds. This ultimately allows one to engineer fast modulations of the effective low-dimensional coupling constant g_1D(t), even on a time scale comparable to ω_⊥⁻¹. Note nonetheless that the excitation of parametric resonances can distort the dynamics [6], as in the experiment [7] where the observation of Faraday waves was reported.
Preserving short-range correlations in time-of-flight measurements. - Time-of-flight expansions constitute an essential tool to study quantum correlations in ultracold gases. Under the assumption of ballistic dynamics, it is possible to relate the asymptotic density profile to the momentum distribution of the initial (trapped) state. The slow power-law decay of the interactions, which follows from suddenly switching off the transverse potential, becomes negligible only on the time scale t ≫ ω_⊥⁻¹. As a result, mean-field interactions are non-negligible in the early stage of the expansion and blur short-range correlations on a length scale δx ~ c/ω_⊥, where c is the speed of sound. To avoid this, one would like to suppress the role of interactions on a faster time scale τ. To this aim, consider the reverse engineering of the time modulation of the transverse trapping frequency that leads to the decay in Eq. (8), which satisfies the first two boundary conditions in Eq. (6). Using Eq. (7), we find the trajectory ω_⊥²(t) that induces it. For short decay times τ, this trajectory leads to negative values of ω_⊥²(t), associated with purely imaginary frequencies. The physical implementation of ω_⊥²(t) < 0 requires that the transverse confining potential becomes an expulsive barrier, pushing the atoms away from the longitudinal axis of the cloud. This is a general feature of trajectories given by Eq. (7) in combination with the first two boundary conditions in Eq. (6), and it is often required for modulations of the coupling constant g_1D and of the transverse density on a time scale smaller than ω_⊥⁻¹. For the time dependence in Eq. (8), ω_⊥²(t) > 0 at early stages and becomes negative only after t* = τ arcsech(···). Expulsive potentials have already been used in the laboratory, for instance in the study of bright solitons [32], and can be generated in a variety of ways, as discussed in [1]. Moreover, provided that g̈_1D(0) ≠ 0, the implementation of the trajectory generally requires sudden jumps of ω_⊥(t), as in bang-bang control methods [24].
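As an illustration, assume (purely as a sketch, since the explicit form of Eq. (8) is not reproduced here) a decay g_1D(t) = g_1D(0) sech²(t/τ), i.e. b(t) = cosh(t/τ). Inverting the Ermakov equation then gives ω_⊥²(t) = ω_⊥²(0) sech⁴(t/τ) − 1/τ², and the time at which the trap must turn expulsive can be located numerically:

```python
import math

def omega_perp_sq(t, tau, omega0):
    """Transverse trap curvature enforcing g1D(t) = g1D(0) sech^2(t/tau),
    i.e. b(t) = cosh(t/tau): w^2(t) = w0^2 sech^4(t/tau) - 1/tau^2
    (an illustrative choice of decay, not the paper's Eq. (8) itself)."""
    sech = 1.0 / math.cosh(t / tau)
    return omega0**2 * sech**4 - 1.0 / tau**2

def turnover_time(tau, omega0, steps=200000):
    """First time at which w^2 crosses zero and the trap becomes expulsive.
    For fast decays (omega0 * tau <= 1) the trap is expulsive from t = 0."""
    if omega0 * tau <= 1.0:
        return 0.0
    for i in range(1, steps + 1):
        t = 5.0 * tau * i / steps
        if omega_perp_sq(t, tau, omega0) < 0.0:
            return t
    return None  # no crossing found in the scanned window
```

For this choice the crossing sits analytically at t* = τ arccosh(√(ω₀τ)), and for ω₀τ ≤ 1 the confinement must be expulsive from the very beginning.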
Inducing FFD in quasi-1D Bose-Einstein condensates. - The existence of scaling laws leading to a self-similar dynamics has motivated FFD proposals for superfast expansions providing a shortcut to adiabaticity [1,2], as recently demonstrated in the laboratory both with ultracold gases [4] and BECs [5]. When such techniques are extended to quasi-1D BECs, the implementation of a self-similar dynamics generally requires modulating in time both the axial trapping frequency and the coupling constant. As a result, proposals for FFD based on scaling laws become particularly challenging. In the following, it is shown that a modulation of the transverse trapping frequency can be used to tune the effective axial coupling constant without the need to use a Feshbach resonance.
For the 1D GPE with a time-dependent axial harmonic trap, it is possible to exploit the scaling law of Eq. (1) for the order parameter Ψ = Ψ(z, t), with the chemical potential µ playing the role of the eigen-energy [2]. The condensate wavefunction evolves self-similarly according to Eq. (9) only if the axial scaling factor b_z obeys the Ermakov equation b̈_z + ω_z²(t)b_z = ω_z²(0)/b_z³ (with b_z(0) = 1, ḃ_z(0) = 0), and a time-dependent non-linearity of the form g_1D(t) = g_1D(0)/b_z(t) is implemented. The required time dependence of g_1D(t) could be achieved by exploiting a Feshbach resonance to tune g_3D. Nonetheless, for a narrow resonance a fast change of the coupling constant (requiring fine control of the external magnetic field) can be experimentally challenging [26]. Alternatively, under isotropic transverse confinement, an axial self-similar dynamics under ω_z(t) can be assisted by keeping g_3D constant and changing the transverse trapping frequency along the trajectory of Eq. (10), which implements the required time-dependent non-linear coupling. This trajectory induces a frictionless dynamics in the transverse direction (essentially a free harmonic oscillator), which modulates the three-dimensional density and ultimately the effective axial non-linearity in the way required for the axial dynamics to be self-similar, g_1D(t) = g_1D(0)/b_z(t). Once the axial dynamics is self-similar, one can engineer b_z(t) to drive an axial FFD. As a result, to perform an axial FFD of a quasi-1D BEC in a time τ, one can proceed in the following way: i) choose the desired initial and final states (b_z(0) = 1, b_z(τ)) and determine b_z(t) as in [1,2]; ii) find the required axial trapping frequency from the Ermakov equation (Eq. (11)); iii) use Eq. (10) to derive the required transverse trapping frequency. Implementing both Eqs. (10) and (11) leads to the self-similar dynamics of Eq. (9) along the designed FFD trajectory b_z(t). Figure 1 shows an instance of these trajectories.
Note that, thanks to the relation between the axial and transverse scaling factors, b_⊥(t) = b_z(t)^{1/2}, the self-similar dynamics along the z-axis can be assisted by a trajectory ω_⊥²(t) involving only real frequencies, without the need to implement a transverse expelling potential, except at exceedingly short expansion times τ or large expansion factors b(τ).
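The three-step recipe above can be sketched numerically. Assuming an illustrative quintic ansatz for b_z(t) (not the trajectory used in the paper), the axial trap follows from ω_z²(t) = ω_z²(0)/b_z⁴ − b̈_z/b_z, and the transverse trap from the same inversion applied to b_⊥ = b_z^{1/2}:

```python
import math

def bz(s, bf):
    """Illustrative quintic axial trajectory: b_z(0)=1, b_z(1)=bf,
    with vanishing first and second derivatives at both ends."""
    return 1.0 + (bf - 1.0) * (10*s**3 - 15*s**4 + 6*s**5)

def second_derivative(f, s, h=1e-5):
    """Central finite-difference second derivative with respect to s."""
    return (f(s + h) - 2.0*f(s) + f(s - h)) / h**2

def trap_trajectories(t, tau, bf, wz0, wp0):
    """Steps i)-iii): given b_z(t), return (w_z^2, w_perp^2) from
    w^2(t) = w(0)^2 / b^4 - b''/b, with b_perp = sqrt(b_z)."""
    s = t / tau
    fz = lambda u: bz(u, bf)
    fp = lambda u: math.sqrt(bz(u, bf))   # b_perp = b_z**0.5
    wz2 = wz0**2 / fz(s)**4 - second_derivative(fz, s) / (tau**2 * fz(s))
    wp2 = wp0**2 / fp(s)**4 - second_derivative(fp, s) / (tau**2 * fp(s))
    return wz2, wp2
```

At t = 0 and t = τ both second derivatives vanish, so the traps start and end static: ω_z²(τ) = ω_z²(0)/b_z(τ)⁴ and ω_⊥²(τ) = ω_⊥²(0)/b_z(τ)², the frictionless conditions for both degrees of freedom.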
Nearly sudden quenches. - Another useful tool to probe ultracold gases is a sudden quench of the interactions. This is the case for the applications mentioned above, as well as for the generation of shock waves and solitons, studies of relaxation dynamics, and many other examples. Let us consider a finite-time quench between an initial value g_1D(0) = g_i and a final value g_1D(τ) = g_f of the coupling constant, which approaches the sudden limit as τ → 0. The situation resembles that of FFD. We require vanishing first- and second-order derivatives of g_1D(t) both at t = 0 and t = τ to avoid transverse excitations at the end of the quench. It is convenient to consider a polynomial ansatz g_1D(t) = Σ_{l=0}^{5} α_l t^l, whose coefficients {α_l} are completely determined by the boundary conditions, leading to the explicit form of the quench

g_1D(t) = g_i + [3s(2s − 5) + 10]γs³    (12)

where γ = g_f − g_i and s = t/τ. The computation of the corresponding trajectory ω_⊥(t) is straightforward using Eqs. (7) and (12), but rather lengthy to be displayed here. Fig. 2 shows the solution ω_⊥(t) implementing different types of quenches. The time required to achieve a given ratio σ = g_f/g_i < 1 under free evolution is τ_0 = ω_⊥⁻¹(0)√(σ⁻¹ − 1), while by implementing ω_⊥(t) it is possible to speed up the decay by several orders of magnitude with a moderate modulation of the transverse confinement. Note that σ = g_f/g_i = b_⊥(τ)⁻² = ω_⊥(τ)/ω_⊥(0), and that the amplitude of the required frequency scales […].

Quasi-2D condensates. - So far we have implicitly focused on a cigar-shaped condensate. One can similarly modulate the nonlinearity in a pancake-shaped condensate under strong enough radial confinement, such that the dynamics along the most tightly confined direction decouples from that in the BEC plane. Assume an oblate 3D harmonic trap with ω_x ≫ ω_y = ω_z = ω_r.
Letting Ψ(x, t) = φ_0(x, t)ψ(y, z, t), with φ_0(x, t = 0) = exp(−x²/(2a_x²))/(π^{1/4}√a_x), and using the scaling law for φ_0(x, t), integration over the tightly confined coordinate yields the effective coupling constant for a 2D cloud undergoing a modulation of the transverse confinement, Eq. (13), where g_2D(0) = g_3D/(√(2π)a_x), with g_2D(t = 0) = g_2D(0) and ġ_2D(0) = 0, and, as in Eq. (6), g̈_2D(0) = 0 prevents discontinuous jumps of ω_x(t) at t = 0. A given time dependence g_2D = g_2D(t) follows from the trajectory in Eq. (14), which may imply imaginary frequencies associated with an expulsive potential, as happens in Eq. (7). In particular, note that suddenly switching off the transverse potential leads to an essentially linear-in-time decay of the interactions. For a g_2D(t) as in Eq. (8), decaying on a time scale τ < ω_x⁻¹, according to Eq. (14) the required time-dependent trajectory of the control parameter, ω_x²(t) = ω_x²(0)sech⁴(t/τ) − 1/τ² < 0, requires an expulsive potential for all t > 0. We close by noticing that in a pancake-shaped cloud quantum fluctuations are negligible whenever Γ_2 = (1/√π³)(a/a_x) ≪ 1 [29] and that, should one be interested in tuning the interactions to a larger value g_2D(τ), the condition becomes Γ_2 g_2D(τ)/g_2D(0) ≪ 1.
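Returning to the smooth quench of Eq. (12): its defining property, first and second derivatives vanishing at both ends so that ω_⊥(t) needs no discontinuous jumps, can be verified directly (the finite-difference helpers below are illustrative):

```python
def g_quench(s, gi, gf):
    """Smooth polynomial quench of Eq. (12), with s = t/tau:
    g(s) = g_i + [3s(2s - 5) + 10] * (g_f - g_i) * s^3."""
    return gi + (3*s*(2*s - 5) + 10) * (gf - gi) * s**3

def d1(f, s, h=1e-5):
    """Central-difference first derivative."""
    return (f(s + h) - f(s - h)) / (2*h)

def d2(f, s, h=1e-4):
    """Central-difference second derivative."""
    return (f(s + h) - 2*f(s) + f(s - h)) / h**2

g = lambda s: g_quench(s, 1.0, 0.2)   # quench from g_i = 1 to g_f = 0.2
```

Expanding the bracket gives g(s) = g_i + γ(6s⁵ − 15s⁴ + 10s³), whose first derivative 30γs²(s − 1)² and second derivative 60γs(s − 1)(2s − 1) indeed vanish at s = 0 and s = 1.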
Discussion and conclusions. - In combination with a spatial dependence of the transverse confining potential, FFD paves the way to controlling the effective coupling constant both in time and in space. Moreover, further applications can be envisaged, such as the preparation of atomic Fock states by many-body atom-culling methods [33], where, starting with a large trapped atomic cloud, a controlled increase of the interactions can provide the expelling mechanism for the excess atoms.
In conclusion, we have presented a scheme to implement a fast frictionless dynamics of a low-dimensional Bose-Einstein condensate, in which spurious excitations are avoided without the need to fulfil adiabaticity constraints. Exploiting the self-similar dynamics in the strongly confined degrees of freedom, we have shown that this can be achieved by engineering the modulation of the transverse confinement of the cloud in an elongated trap. As a result, it is possible to tune the amplitude of the non-linear interactions in these systems. We have further applied the method to preserve short-range correlations in time-of-flight, assist shortcuts to adiabatic expansions in quasi-1D interacting BECs, and implement nearly sudden interaction quenches. More generally, we argue that inverting the equations associated with self-similar scaling laws allows one to determine the trajectory of the control parameter for different processes, and constitutes a powerful toolbox for the manipulation of ultracold atoms.

* * *

It is a pleasure to acknowledge discussions with L. Santos, J. G. Muga, A. Ruschhaupt, X. Chen, and M. D. Girardeau. The author further acknowledges financial support by EPSRC and the European Commission (HIP), as well as the hospitality of the MPIPKS.
Shedding Light on the Role of ERAP1 in Axial Spondyloarthritis
Spondyloarthritis (SpA) is a multifactorial chronic inflammatory disease affecting the axial skeleton (axSpA) and/or peripheral joints (p-SpA) and entheses. The disease's pathogenesis depends on genetic, immunological, mechanical, and environmental factors. Endoplasmic reticulum aminopeptidase 1 (ERAP1) is a multifunctional enzyme that shapes the peptide repertoire presented by major histocompatibility complex (MHC) class I molecules. Genome-wide association studies (GWAS) have identified different single nucleotide polymorphisms (SNPs) in ERAP1 that are associated with several autoimmune diseases, including axSpA. Therefore, a deeper understanding of the ERAP1 role in axSpA could make it a potential therapeutic target for this disease and offer greater insight into its impact on the immune system. Here, we review the biological functions and structure of ERAP1, discuss ERAP1 polymorphisms and their association with axSpA, highlight the interaction between ERAP1 and human leukocyte antigen (HLA)-B27, and review the association between ERAP1 SNPs and axSpA clinical parameters.
Introduction And Background
Axial spondyloarthritis (axSpA) is a chronic autoimmune musculoskeletal disorder primarily affecting the skeletal system. Peripheral manifestations (arthritis, enthesitis, and dactylitis) and extraskeletal manifestations are frequent, with the latter referring to acute anterior uveitis, inflammatory bowel disease, and psoriasis. The disease ranges from non-radiographic axSpA to radiographic axSpA, the latter also recognized as ankylosing spondylitis (AS) [1].

Axial spondyloarthritis is an unresolved rheumatic disease that results in bony ankylosis, pain, and functional limitations, primarily affecting the lumbar spine and the sacroiliac and peripheral joints [2]. The quality of life of patients with axSpA gradually deteriorates with disease progression. As a result, patients lose their ability to work and care for themselves, burdening society and the patients' families. It is one of the most challenging diseases to treat, with a significant risk of impairment and a high cost of care [3]. Chronic inflammation causes pain and spinal ankylosis in axSpA. However, the mechanisms underlying this chronic inflammation remain unclear. Despite years of research on the complexities of axSpA, little progress has been made in identifying the signaling events that lead to disease development [4].

Recent advances in our understanding of axSpA pathogenesis have resulted in a better understanding of risk factors and disease causation and in the development of targeted treatments [5]. Genetic studies have significantly improved the understanding of axSpA [6]. The pathogenic mechanisms of axSpA include a complex interaction between the genetic background, environmental triggers, and mechanical stress, resulting in the overall initiation of inflammation and autoimmune reactions [7].

Genetic susceptibility to axSpA is highly complex, as demonstrated by several genome-wide association studies (GWAS) [8]. Hundreds of genes, primarily immune-related, have been identified as associated with the axSpA spectrum [9]. After the identification of human leukocyte antigen (HLA)-B27 in 1973 as a significant genetic risk factor, its contribution to axSpA became recognized for the first time [10][11][12]. This link was so strong that HLA-B27 was thought to be the sole genetic factor predisposing individuals to axSpA [13]. This particular allele is carried by 85% to 90% of patients with axSpA, even though only 5% of people carrying HLA-B27 in their genetic background will develop axSpA [14]. Over time, many efforts have been made to understand the mechanism by which major histocompatibility complex (MHC) class I molecules contribute to axSpA pathogenesis. Three different theories have been proposed to explain the role of HLA-B27: (1) the arthritogenic peptide, (2) HLA misfolding and accumulation, and (3) HLA-B27 homodimers on the cell surface [15].

The involvement of other non-HLA MHC genes, such as MICA, TNF, transporter associated with antigen processing (TAP)1, TAP2, and LMP2, has also been suggested but has not been confirmed because of linkage disequilibrium with HLA-B27 [16]. The development of GWAS has recently resulted in the identification of additional non-MHC susceptibility loci for axSpA, two of which, namely endoplasmic reticulum aminopeptidase 1 (ERAP1) and interleukin 23 receptor (IL23R), are particularly interesting because they shed light on important biological pathways involved in SpA pathogenesis [17].

Genetic variants, as well as epigenetic mechanisms such as DNA methylation, histone modification, and noncoding RNAs, are particularly relevant in explaining SpA pathogenesis. Alterations of histone H3 (H3K27ac and H3K4me1) seem to be correlated with RUNX3 expression and the reduction of CD8+ T cells in the presence of the rs4648889 SNP variant in patients with SpA [20]. Among epigenetic mechanisms, microRNAs (miRNAs) are the most intriguing [17]. The pathogenesis of axSpA is summarized in Figure 1 [17].
Review

Biological functions of ERAP
The human cellular immune system detects damaged and infected cells based on the exposure, on their surface, of peptides produced by proteolytic processing of intracellular and endocytosed proteins, including aberrant and unnecessary proteins. The immune response is triggered by the binding of these potentially immunogenic peptides to MHC class I molecules, expressed on all nucleated cells and platelets, and their presentation to CD8+ T lymphocytes [21].

ERAP1, ERAP2, and insulin-regulated aminopeptidase (IRAP), an enzyme expressed in endosomes with a peptide-trimming role analogous to that of ERAP1/2, are all components of the M1 zinc metalloproteases, specifically the oxytocinase subfamily. Endoplasmic reticulum aminopeptidase 1 shares 49% and 43% sequence homology with ERAP2 and IRAP, respectively, mainly within conserved active-site domains [22]. Although ERAP1 is expressed in both humans and rodents, ERAP2 is absent in rodents and is not expressed as a full-length protein in approximately 25% of the human population, despite being present in the human genome [23,24], suggesting that its role is at any rate dispensable [25].

The peptide repertoire of cells is sustained through the antigen processing and presentation pathways. This mechanism permits the production of various peptides ideal for MHC class I molecules [26]. The presentation process is the result of a sequence of steps, in which the peptides are shaped by the antigen processing machinery (APM) [21]. The earlier stages of antigen presentation begin in the cytosol, where the proteasome, or the immunoproteasome under inflammatory conditions, undertakes the first processing event. Abnormal proteins are degraded by the ubiquitin-proteasome system, which generates smaller peptide fragments. The proteasome/immunoproteasome cleavage pattern often results in a hydrophobic C-terminal residue that is optimal for loading into the F-pocket of most MHC class I peptide-binding grooves [22]. The resulting peptides are transported to the ER via the TAP protein complex [27].

The selection of antigenic peptides that bind to MHC class I molecules is a critical step in MHC class I maturation in the endoplasmic reticulum (ER) [28]. MHC class I receives its peptide load in the ER via a peptide-loading complex. While MHC class I tends to bind peptides between eight and 11 amino acids long (the majority of which are 9-mers), many peptides that enter the ER can be substantially longer. ERAP1 and ERAP2 are two ER-resident aminopeptidases that trim precursor peptides and define the peptide pool available for binding to MHC class I [29].

Finally, the MHC class I antigen processing and presentation pathway (Figure 2) is completed with the help of molecules and chaperones, namely the peptide-loading complex (PLC), TAP, tapasin, ERp57, and calreticulin, which result in the formation of stable peptide-loaded MHC class I molecules that can egress to the cell surface for recognition by CD8+ T cells and NK cells [29].
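The processing pipeline described above can be caricatured in a few lines; this is a deliberately simplified toy model (fragment lengths and trimming rules here are illustrative placeholders, not biochemical parameters):

```python
import random

LOADABLE = range(8, 12)       # MHC class I binds ~8-11-mers (mostly 9-mers)

def proteasome(protein, rng):
    """Cut a protein into precursor fragments of random length 8-16,
    mimicking proteasomal cleavage in the cytosol."""
    fragments, i = [], 0
    while i < len(protein):
        n = rng.randint(8, 16)
        fragments.append(protein[i:i + n])
        i += n
    return fragments

def erap1_trim(peptide):
    """ERAP1-style N-terminal trimming, one residue at a time,
    down to a canonical 9-mer (the C-terminus is left untouched)."""
    while len(peptide) > 9:
        peptide = peptide[1:]
    return peptide

def presented_peptides(protein, seed=0):
    """Proteasome -> TAP transport -> ERAP1 trimming -> MHC I loading."""
    rng = random.Random(seed)
    trimmed = (erap1_trim(p) for p in proteasome(protein, rng))
    return [p for p in trimmed if len(p) in LOADABLE]
```

The point of the sketch is structural: the proteasome sets the C-terminus, ERAP1 shortens only the N-terminus, and MHC class I acts as a length filter on the resulting pool.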
FIGURE 2: MHC class I antigen presentation pathway
Cellular proteins are hydrolyzed by the ubiquitin-proteasome pathway into oligopeptides, which are subsequently transported into the ER through the TAP transporter.In the ER, these peptides may be further trimmed by ERAP1, and then peptides of the right length and sequence bind to MHC class I molecules with the help of tapasin in a peptide-loading complex containing tapasin, TAP, calreticulim, and ERP57, or with the help of TAPBPR.After MHC class I molecules bind peptides, they are transported to the cell surface for display by CD8+ T cells [30].
ER: Endoplasmic reticulum, MHC: Major histocompatibility complex, TAP: Transporter associated with antigen processing Therefore, by modifying ERAP1 and ERAP2 activity and/or expression, ERAP SNPs have a pronounced influence on the availability and repertoire of antigenic peptides for presentation by HLA class I molecules [31], thereby influencing disease vulnerability [32].Although these two enzymes may function individually, they may also form a heterodimer to enhance trimming efficiency.The protein encoded by ERAP1 acts as a monomer or heterodimer with ERAP2 [33].Endoplasmic reticulum aminopeptidase 1 is believed to be the central enzyme in the ER that is involved in peptide trimming, whereas ERAP2 plays a minor role [34].Figure 3 illustrates the heterodimeric ERAP2/ERAP1 model.Endoplasmic reticulum aminopeptidase 1 not only plays a canonical function in the adaptive immune system through its function in the ER as an aminopeptidase processing peptide destined for MHC class I presentation to CD8+ T cells, but is also needed to repress several pro-inflammatory and innate immune responses [36].It also plays a role in the proteolytic cleavage of cytokine receptors, such as tumor necrosis factor receptor 1 (TNFR1), IL6R2, and IL1R2, which are expressed on the cell surface via receptor cleavage.The shedding of cell surface receptors by ERAP1 can control the cellular immune response by modulating receptor availability on the cell surface, which causes a reduction in proinflammatory signaling [37].
Human ERAP1 variants enhance IL-1β production by human immune cells via a mechanism that implicates K+ efflux, a well-known signal for the nucleotide-binding domain, leucine rich-containing family, and pyrin domain-containing-3 (NLRP3) inflammasome activation.However, the mechanisms underlying the ERAP1dependent immune responses remain unknown.The synthesis of inflammatory cytokines and chemokines by the innate immune system is mainly mediated by the activation of different germline-encoded pattern recognition receptors (PRRs), such as TLRs, RIG-I-like receptors (RLR), and NOD-like receptors (NLRs).The activation of these PRRs causes the coordinated activation of intracellular signaling pathways that regulate the transcription of chemokine genes, inflammatory cytokines, and other innate immune defense reactions [36].
In addition to their involvement in the immune system, ERAPs contribute to cell migration and angiogenesis, which are essential processes in both pregnancy and cancer [38]. The lack or downregulation of ERAP1 expression disrupts the antigen-presenting properties and immunological function of MHC class I molecules in host defense against infection. Evidence for the role of ERAP1 in regulating blood pressure comes from in vitro studies showing that this enzyme inactivates angiotensin II through its conversion to inactive angiotensin IV and converts kallidin to bradykinin [39]. Figure 4 shows the functions of ERAP1 and ERAP2.
Structure of ERAP1
Endoplasmic reticulum aminopeptidases are encoded within a 167-kb region on chromosome 5q15 [41]. They share two fundamental sequence motifs vital for enzymatic activity: the HEXXH(X)18E zinc-binding and GAMEN substrate-recognition sequences. Alternative splicing of ERAP1 produces two N-glycosylated isoforms: ERAP1a (948 amino acids and 20 exons) and ERAP1b (941 amino acids and 19 exons). Endoplasmic reticulum aminopeptidase 1b is more frequent than ERAP1a [42], and the two isoforms have similar amino acid arrangements, except for some amino acids at the C-terminus and several 3'-untranslated region (3'UTR) sequences [43]. Both ERAP1 and ERAP2 have a similar configuration, comprising four structural domains arranged in a concave orientation around the active site [35].
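For readers inspecting sequences themselves, the two motifs above are regular enough to locate programmatically. The following minimal Python sketch is illustrative only (it uses a made-up toy sequence, not real ERAP1/ERAP2 sequence, which would instead be retrieved from a database such as UniProt):

```python
import re

# HEXXH(X)18E zinc-binding motif: H-E-any-any-H, then 18 arbitrary
# residues, then a glutamate. GAMEN is a literal substring.
ZINC_MOTIF = re.compile(r"HE..H.{18}E")
GAMEN_MOTIF = re.compile(r"GAMEN")

def find_motifs(seq: str):
    """Return (motif_name, start_index, matched_text) for every hit."""
    hits = []
    for name, pattern in (("zinc", ZINC_MOTIF), ("GAMEN", GAMEN_MOTIF)):
        for m in pattern.finditer(seq):
            hits.append((name, m.start(), m.group()))
    return hits

# Toy sequence constructed to contain both motifs (illustrative only).
toy = "AAGAMENKK" + "HEAAH" + "A" * 18 + "EPP"
print(find_motifs(toy))
```

Running the sketch reports one GAMEN hit and one zinc-binding hit in the toy sequence, with their positions and matched text.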
The crystallographic structure of ERAP1 revealed four domains: domain 1 (residues 46-254), domain 2 (residues 255-529), domain 3 (residues 530-614), and domain 4 (residues 615-940), which together form the final structure of ERAP1 [42]. The structural domains are positioned in a concave orientation around the active site. Endoplasmic reticulum aminopeptidase 1 has been crystallized in two distinct conformations, in which the relative arrangement of the four domains is modified to either reveal or shelter a large internal cavity from the external solvent [44].
One of the crystal structures matched the closed conformation observed in other members of the M1 aminopeptidase family. The other two crystal structures captured an open conformation in which domain IV underwent a rigid-body translocation away from domain II, exposing the internal cavity to the bulk solvent. Along with the reorientation of domain IV, the active site was rearranged, with tyrosine 438 rotating away into a position unsuitable for catalysis. This suggests a relationship between conformational state and catalytic activity [45]. Domain III acts as a hinge, allowing open-close-open transitions [46]. The internal cavity can accommodate large peptide substrates, and the conformational change between these two states is crucial for catalytic activity [45]. Endoplasmic reticulum aminopeptidase 2 has only been crystallized in a "closed" conformation in which the internal cavity is not accessible to the external solvent, making a conformational change similar to that observed in ERAP1 obligatory for product-substrate exchange [47].
The rs30187 SNP, which encodes the K528R variant, and the rs27044 SNP, which encodes the Q730E variant [48], are located in the ERAP1 regulatory domain (ERAP1_R), which is distinct from the catalytic N-terminal domain. Endoplasmic reticulum aminopeptidase 1 favors peptide substrates with C-terminal hydrophobic residues, which have been demonstrated to anchor to a hydrophobic pocket on the binding surface of ERAP1_R [49].
Endoplasmic reticulum aminopeptidase 1 is highly polymorphic, with multiple common isoforms occurring in the general population. Functional examination revealed that ERAP1 missense SNPs affect the trimming efficiency of specific substrates [32]. Several ERAP1 SNPs are also strong expression quantitative trait loci (eQTLs) [50]. Endoplasmic reticulum aminopeptidase 1 haplotype combinations show even more pronounced effects on disease susceptibility [51].
The rs2287987 polymorphism is located at the active site; rs30187 and rs10050860 are located at domain junctions; and rs27044 and rs17482078 are located on the inner surface of the peptide-binding cavity of ERAP1 [52]. These polymorphisms influence the substrate specificity and catalytic activity of ERAP1 in a substrate-dependent manner [53].
Endoplasmic reticulum aminopeptidase 2 has only one missense SNP, rs2549782, with apparent discrepancies in trimming efficiency between the two alleles. This SNP is in perfect linkage disequilibrium with rs2248374, a splice-site SNP that influences ERAP2 splicing and results in a transcript isoform with an extended exon-10 region harboring a premature stop codon. The powerful eQTL effect observed for this SNP can be explained by degradation of the ERAP2 transcript through the nonsense-mediated decay pathway [32].
While ERAP1 and ERAP2 exhibit similar overall structures and mechanisms, they show significant differences, suggesting distinct roles in antigen processing. They have specific preferences for N-terminal amino acids (ERAP1 favors hydrophobic amino acids, whereas ERAP2 favors positively charged amino acids) and differ in their preference for substrate length (ERAP1 prefers peptides longer than 9 amino acids, while ERAP2 can efficiently trim shorter peptides) [44]. Additionally, these enzymes have distinct effects on the cellular immunopeptidome, possibly due to differences in their internal cavities that influence enzyme-substrate interactions [54]. These differences could allow them to synergize when trimming ER peptides to cover as many different sequences as possible [44]. Finally, ERAP1 and ERAP2 are polymorphic, with single-nucleotide polymorphisms that affect their functions and contribute to the variability of immune responses in natural populations [29]. Overall, ERAP1 plays a dominant role in antigen processing, whereas ERAP2 has supporting or complementary functions [55]. Figure 5 shows the structures of ERAP1, ERAP2, and IRAP.
FIGURE 5: Ribbon and surface representations of ERAP1, ERAP2, and IRAP
A and B: ERAP1 complexes with the aminopeptidase inhibitor bestatin in the "closed" and "open" states; C: ERAP2 and D: IRAP complexes with a phosphinic pseudopeptidic inhibitor, shown as orange-colored spheres. The four domains are labeled and color-coded as cyan, blue, yellow, and red for domains I, II, III, and IV, respectively. Catalytic zinc is shown as a pink sphere, and polymorphic site residues are indicated by green spheres. The modeled regions in ERAP1 that were not determined in the X-ray structures are shown in gray [56].
ERAP1 polymorphism and its association with axSpA
Endoplasmic reticulum aminopeptidase 1 has variants that alter peptide-trimming activity, specificity, and expression, even when they are distant from the active site. Different ERAP1 genetic variants have been associated with multiple HLA class I autoinflammatory diseases, such as axSpA, Behcet's disease (BD), psoriasis, multiple sclerosis (MS), type I diabetes, and essential hypertension, as well as with susceptibility to infectious diseases such as human papillomavirus (HPV)-induced cancer, HIV, hepatitis C virus (HCV), and human cytomegalovirus (HCMV) infection [42,57,58].
In 2007, a GWAS of Caucasian Europeans by the Wellcome Trust Case Control Consortium and the Australo-Anglo-American Spondylitis Consortium (TASC) [59] discovered that ERAP1 polymorphisms were associated with axSpA. The mechanism of action of HLA-B27 in the pathogenesis of axSpA makes it likely that peptide supply plays an important role. This can be investigated by characterizing the peptides bound to HLA-B27 (the peptidome) [60], either through the generation or destruction of specific epitopes, affecting the role of HLA-B27 in host defense and immune homeostasis, or through alterations in the stability of the HLA-B27 molecule [61].
Reduced ERAP1 expression increases intracellular free heavy chain (FHC) levels of axSpA-associated HLA-B27 molecules and correlates with an increased length of peptides eluted from HLA-B27 molecules. This suggests that ERAP1 activity directly influences the HLA-B27 peptidome, which affects the stability of these molecules both intracellularly and at the cell surface. The levels of cell surface FHC in axSpA patients vary in response to the particular ERAP1 SNPs these patients possess but, intriguingly, do not correlate with the trimming activity of these SNPs [62].
Alternatively, the IL-23/IL-17 axis is postulated to be involved in the pathogenesis of axSpA. The impaired function of ERAP1 and susceptibility variants in ERAP1 may culminate in the accumulation of unconventional structures of HLA-B27 in the ER, which causes an unfolded protein response (UPR) in cells from axSpA patients. The UPR in macrophages from axSpA patients results in increased production of IL-23 and IL-26, which can bind to IL-23R on CD4+ T cells and induce their differentiation into IL-17-producing inflammatory Th17 cells [63]. Lee et al. [64] found an increase in IL-17 cytokines through the UPR and ER stress that was not influenced by the ERAP1 gene.
Therefore, the ERAP1 protein features in all three hypotheses regarding SpA. The arthritogenic hypothesis highlights the role of ERAP1 in trimming and regulating the sequence of antigenic peptides presented by HLA-B27. The HLA-B27 misfolding hypothesis suggests that improper peptide arrangement is the primary cause of homodimeric FHC formation on the cell surface. The final hypothesis, which posits an imperfect peptide-cutting process, suggests that this can increase intracellular apoptosis and ER stress. All three reinforce the idea that cellular autoinflammatory processes play a role in axSpA development. Thus, the function of ERAP1 is crucial in axSpA disease activity because it involves the processing and regulation of antigenic peptides, which is the starting point of the autoinflammatory cascade [64,65]. Figure 6 demonstrates the possible roles of HLA-B27 and ERAP in axSpA pathogenesis.
FIGURE 6: Possible role of HLA-B27 and ERAP in axSpA pathogenesis
Once transported into the ER by TAP, peptides are assembled onto nascent MHC class I molecules by the PLC, which consists of TAP, Tpn, CRT, and ERp57, before being trimmed by ERAP. The existence of specific ERAP haplotypes causes a substantial reduction in the number of peptides optimal for HLA-B27, leading to an accumulation of misfolded proteins in the ER or the expression of suboptimal or neoantigen-loaded HLA-B27 on the cell surface. The presence of neoantigen-loaded HLA-B27 on the cell surface triggers CD8+ T cell activation. Misfolding of proteins in the ER can trigger ER stress and initiate the UPR, which in turn leads to the secretion of IL-23. IL-23 then activates the IL-23/IL-17 axis. During HLA-B27 recycling through the endocytic pathway, free heavy chains or disulfide bond-linked homodimers of HLA-B27 are formed and expressed on the cell surface. Engagement of these aberrant species by KIR3DL2 on the surface of Th17 cells enhances their survival, proliferation, and IL-17 expression. IL-17 can promote the release of pro-inflammatory cytokines to induce inflammation [48]. Approximately 60% to 90% of patients with axSpA worldwide carry HLA-B27. The risk of developing AS is as high as 5% to 7% in HLA-B27-positive individuals. The genetic association of the aminopeptidase polymorphisms ERAP1 and ERAP2 with axSpA is the second strongest after HLA-B27, accounting for 15% to 25% of the population risk [37]. The genetic interaction between ERAP1 and HLA-B27 in axSpA indicates that peptide cleavage and presentation contribute to axSpA susceptibility [66]. Together, HLA-B27 and ERAP explain 70% of the genetic risk of developing SpA [37]. The bulk of ERAP1 SNPs associated with SpA are located close to the catalytic site (residues 346 and 349), in the binding groove (residues 725 and 730), or close to locations that potentially affect conformational rearrangements (residues 528 and 575). Further SNPs are found at inter-domain sites or in domain IV, a regulatory region responsible for C-terminal residue peptide binding [67]. Controversial results have been reported regarding the relationship between ERAP1 SNPs and axSpA susceptibility.
The rs27044 Polymorphism
Multiple ERAP1 features are affected in a length-dependent manner by rs27044 [68], which may lead to susceptibility to axSpA [39]. The rs27044 SNP encodes the Q730E amino acid substitution, which is correlated with modifications in peptide length preference and trimming specificity [52,69,70].
In 2010, the first confirmation in a non-Caucasian population that genetic polymorphisms in ARTS1 (SNP rs27044) were associated with axSpA implicated common pathogenetic mechanisms in Korean and Caucasian patients with axSpA [71]. Choi et al. [72] discovered that rs27044 was associated with axSpA in Asians and Caucasians. However, a later analysis by Lee et al. found that the association existed only in the general population and Caucasians but not in Asians [73]. Wang et al. [74] found that the rs27044G allele was a predisposing factor for axSpA in Taiwanese individuals. A meta-analysis of 26 case-control studies with 31 cohorts concluded that rs27044 is significantly correlated with axSpA in Asians and Caucasians [75]. Correspondingly, Chen et al. [76] found a significant association between rs27044 and axSpA, although their meta-analysis included only six studies with limited statistical power.
In contrast, in their meta-analysis and bioinformatics analysis, Bai et al. [77] found no association between rs27044 and axSpA susceptibility. Similarly, the meta-analysis by Lee et al. [73] demonstrated that no association between rs27044 and axSpA could be found in Middle Easterners and East Asians; it was found solely in Europeans. Another meta-analysis, published in 2018 by Jiang et al. [78], explored the relationship between ERAP1 polymorphisms and susceptibility to axSpA in the East Asian population and found a significant association between rs27044 polymorphisms and axSpA susceptibility.
Two Mexican studies conducted separately by Fernández-Torres et al. and Martínez-Nava et al. [79,80] reported that the rs27044 polymorphism of the ERAP1 gene was not significantly associated with axSpA. Similarly, SNP rs27044 was not reported to be associated with the disease in Spanish [81] or Chinese populations [82]. Finally, Cai et al. [83] found no association between the rs27044 polymorphism and axSpA in the general population.
The rs30187 Polymorphism
The ERAP1 rs30187 SNP encodes a lysine or arginine at position 528 [84,85]. In biochemical assays, the lysine/arginine 528 substitution affects ERAP1's peptide hydrolysis activity [52]. It has been linked to autoimmune disorders in epistasis with specific MHC alleles [84,85] and modifies the set of peptides presented by MHC [68,86]. The rs30187 polymorphism, at position 1583 in exon 11, induces a substitution from C to T (K528R), and several investigations have shown that R528 reduces the activity of ERAP1 [8,87]. The rs30187 variant, which encodes the K528R amino acid replacement, reduces the efficacy of peptide trimming by affecting the kinetics of ERAP1's transition between active and inactive states [69]. Its localization near the entry point of the substrate pocket could affect substrate affinity and decrease ERAP1 activity [88]. Owing to this lower function, the rs30187 allele, which trims peptides at roughly 40% of the wild-type ERAP1 rate, is protective [89]. Sanz-Bravo et al. [69] discovered that rs30187 influences ERAP1 activity through the N-terminal flanking residues, peptide length, internal sequence, and HLA-B27 affinity.
Similar to rs27044, rs30187 is associated with axSpA in Asians and Caucasians [72]. Nonetheless, a subsequent analysis found that the association existed only among the general population and Caucasians but not among Asians [73]. The SNP rs30187 of ERAP1 was significantly associated with axSpA in Koreans [71] and Taiwanese [74], whereas the T/T genotype was associated with axSpA compared to the C/C genotype in the Iranian population [90]. Gao et al. [75] found in 2020 that, although a positive association was observed under the allelic model in Asians and Caucasians, genotypic comparisons confirmed the association only in Caucasians, not Asians. Similarly, Wang et al. [91] noticed that the ERAP1 SNP rs30187 had significantly different genotype and allele distributions between patients with axSpA and healthy controls.
Chen et al. supported this significant link between rs30187 and axSpA in a meta-analysis comprising 8,530 axSpA patients and 12,449 controls [76]. Another meta-analysis by Cai et al., comprising 24,271 axSpA patients and 42,666 controls, supported this link as well [83]. The ERAP1 rs30187 polymorphism was not associated with axSpA in the Zhejiang [92], Turkish [93], or Algerian populations [94]. Moreover, in a meta-analysis and bioinformatics analysis including 534 Caucasian patients with axSpA and 830 healthy controls, there was no significant association between the minor allele of rs30187 and axSpA susceptibility [77].
The rs26653 Polymorphism
The ERAP1 rs26653 genotype may influence the adenosine triphosphate (ATP)-binding reserve and transport efficacy of TAP or alter the substrate specificity and proteolytic capability of the immunoproteasome [95]. Cinar et al. [93] were the first to address the relationship between ERAP1 and axSpA in a Turkish population. They confirmed that the frequency of the rs26653 SNP was higher in patients than in controls. In addition, Küçükşahin et al. [95] reported that, in a Turkish population, there was a statistically significant difference in the frequency of the rs26653 SNP C/C homozygous genotype in axSpA patients, and the frequency of the rs26653 risk allele was higher in axSpA patients than in controls. In a more recent study, Wang et al. [91] examined how prevalent ERAP1 allelic variants (single-nucleotide variant (SNV) haplotypes) in Taiwan affect ERAP1 function and axSpA susceptibility in the presence or absence of HLA-B27 and found that the ERAP1 SNP rs26653G>C had significantly different genotype and allele distributions between 863 axSpA patients and 1,438 healthy controls. In contrast, an earlier study did not support the contribution of rs26653 to axSpA pathogenesis, particularly in HLA-B27-positive patients [8].
The rs27037 Polymorphism
In 2018, a meta-analysis revealed that rs27037 is significantly associated with axSpA [78]. Another meta-analysis, published in 2015, found that rs27037 was positively associated with the risk of axSpA in Caucasians and Asians [83]. The same observation was obtained in a bioinformatics analysis of genetic variants of ERAP1 in axSpA: SNP rs27037 was statistically significant in a combined European and Asian study [88]. Lee and Song [73] also reported a significant association between the rs27037 polymorphism and axSpA susceptibility in European and Asian populations. Furthermore, the ERAP1 rs27037 polymorphism locus is highly associated with axSpA in the Chinese population [92].
The previous finding did not agree with that of Su et al. [96], who conducted a case-control association study and meta-analysis to assess whether SNPs in ERAP1/ERAP2 and RUNX3 confer susceptibility to axSpA in Han Chinese individuals. Their case-control study comparing HLA-B27-positive patients and healthy controls failed to demonstrate an association between rs27037 and axSpA. Moreover, their meta-analysis revealed no association between rs27037 in ERAP1 and the disease. Tang et al. [97] also examined the association between five polymorphisms in the ERAP1 gene and the risk of axSpA in a Chinese population; they failed to provide evidence for an association between the rs27037 polymorphism in ERAP1 and axSpA risk. Zhang et al. [84] could not confirm the association of axSpA with ERAP1 SNP rs27037. Likewise, a Turkish study by Akbulut et al. [98] found no risk association between axSpA and the rs27037 polymorphism in the studied population.
The rs27434 Polymorphism
Li et al. [99] conducted a case-control association study to determine whether ERAP1 is also associated with the incidence of axSpA in a Chinese population and whether it is correlated with clinical features. Their results showed that SNP rs27434 was significantly associated with the disease, consistent with the results of Liu et al. [92], who found that the ERAP1 rs27434 polymorphism locus was highly significantly associated with axSpA in the Zhejiang population. Another study confirmed a weak association between ERAP1 rs27434 and axSpA in the Beijing Han Chinese population [84]. In Iran, the rs27434 G/G genotype was found to be inversely associated with axSpA compared to the A/A genotype [90].
The rs27980 Polymorphism
In a recent study, Wang et al. [91] discovered that the ERAP1 intron SNP rs27980A>C had significantly different genotype and allele distributions between patients with axSpA and healthy controls. Similarly, Liu et al. [92] found that the ERAP1 rs27980 polymorphism locus was highly significantly associated with axSpA in the Zhejiang population. The rs27980C allele appears to be a modest risk factor for axSpA susceptibility in Taiwanese individuals [74]. Nevertheless, Cinar et al. and Zhang et al. [84,93] could not confirm the association of SNP rs27980 with axSpA in Turkish and Beijing Han Chinese populations, respectively.
The rs10050860 Polymorphism
Recently, genotype T of the rs10050860 SNP of the ERAP1 gene was found to have a protective effect on axSpA in the population of Western Algeria [94]. The same conclusion has been reported in Korean [71], Turkish [93], and Zhejiang populations [92]. Likewise, this finding is consistent with the meta-analysis and bioinformatics analysis of Bai et al. [77].
The rs17482078 Polymorphism
The ERAP1 SNP rs17482078 showed findings similar to those of rs10050860. It has a protective effect with respect to axSpA in the Korean [71], Turkish [93], and Zhejiang populations [92]. According to Bai et al. [77], there is no significant association between the minor allele of rs17482078 and axSpA susceptibility.
The rs2287987 Polymorphism
The rs2287987 (Met349Val) polymorphism is located close to the catalytic center and affects enzyme activity [100]. A GWAS performed on British individuals first described the association of ERAP1 rs2287987 with susceptibility to axSpA [59]. The association between this polymorphism and AS was observed in Spanish [81], Portuguese [101], Hungarian [102], Polish [103], and Iranian [42] populations. Another meta-analysis showed that rs2287987 seems to be associated with AS in Caucasians and overall populations but not in Asians [83]. Bai et al. [77] discovered that the minor allele of rs2287987 is a protective factor against axSpA in the HLA-B27-positive population. Furthermore, there was no significant association between rs2287987 and axSpA in the Korean [71] and Turkish populations [93].
ERAP1 and HLA-B27 interactions
Endoplasmic reticulum aminopeptidase 1 may function in tandem with HLA-B27 molecules, and ERAP1 polymorphisms may result in an abnormal peptide-HLA (pHLA)-B27 repertoire linked to pathogenic immune responses. Furthermore, abnormal pHLA can be unstable and misfolded [104]. Misfolded HLA-B27 molecules can cause ER stress or appear as surface HLA class I free heavy chains (FHCs), resulting in abnormal immune interactions with various receptors [105-108]. Direct or indirect changes in the ERAP1-HLA-B27 interaction could be critical, altering peptide presentation, generating free heavy chains of HLA-B27 molecules, and contributing to differential subtype associations in SpA [109].
In their review, Zambrano-Zaragoza et al. [110] demonstrated a consistent association between ERAP1 and axSpA in HLA-B27-positive cases. Abnormal peptide trimming or presentation by ERAP1 and HLA-B27 plays a role in the pathogenesis of HLA-B27-associated axSpA. Wang et al. [91] found that the allele distributions of ERAP1 SNPs (rs26653, rs26618, rs30187, rs469783, rs27044, and rs27037) differed significantly between HLA-B27-positive and HLA-B27-negative patients. Wang et al. [74] examined whether ERAP1 SNPs are linked to axSpA susceptibility and disease severity in Taiwanese individuals. They demonstrated that ERAP1 SNPs are associated with HLA-B27 positivity in Taiwanese patients with axSpA. These findings support the idea that ERAP1 and HLA-B27 play complementary roles in axSpA pathogenesis in humans. Their results also indicated that abnormal antigen processing by ERAP1 and antigen presentation by HLA-B27 may be essential pathways in the development of axSpA. In contrast, patients with axSpA who are negative for HLA-B27 may develop pathological immune responses via other, as yet unidentified, biological pathways.
Several studies [84,89,92,95,110-113] confirmed an epistatic interaction between ERAP1 SNPs and the HLA-B27 locus. This interaction was not detected by Bai et al., who found no significant association between the minor alleles of rs30187 and rs10050860 and axSpA susceptibility in an HLA-B27-positive population in a recent meta-analysis [77]. Similarly, Asmaa et al. [94] investigated the roles of rs30187 and rs10050860 in the presence and absence of HLA-B27. They found no link between HLA-B27 positivity or negativity and the frequency of ERAP1 SNPs (rs30187 and rs10050860). Similarly, Cinar et al. [93] found no association between HLA-B27 positivity and the ERAP1 genotype frequency distribution; again, no significant association was found between the frequency of ERAP1 variants and that of B27:05 or B27:02:01. Likewise, Zhang et al. [84] found no correlation between ERAP1 SNP rs27037 and axSpA in either HLA-B27-negative or HLA-B27-positive axSpA groups.
Association of ERAP1 SNPs with axSpA clinical parameters
Recognizing the genetic variables that affect functional severity would improve functional status prediction in patients with axSpA. The immune system and bone development share cellular and molecular signaling pathways that regulate hematopoietic cells and bone homeostasis [114]. According to animal models, inflammation and new bone formation are not related [115,116]. Clinically, various anti-TNF drug treatments repress inflammation but do not slow structural progression, as measured by the modified Stoke ankylosing spondylitis spinal score (mSASSS) [117,118]. These findings suggest that syndesmophyte development is likely attributable to the intrinsic genetic effects of ERAP1 on peptide/MHC class I complex formation [74]. Szczypiorska et al. [81] were the first to report a link between SNPs in ERAP1 and axSpA functional status. Likewise, Wang et al. [74] discovered that the SNPs rs27044 and rs30187 were linked to syndesmophyte formation. Both studies suggest that ERAP1 is related to disease severity.
In an Iranian cohort, rs10050860 was strongly associated with the Bath ankylosing spondylitis functional index (BASFI) score [42]. Carriers of the SNP rs27044G and SNP rs30187T alleles are prone to developing syndesmophytes, indicating that ERAP1 cSNPs may affect axSpA disease severity. After controlling for HLA-B27 positivity, SNP rs30187 remained significantly associated with syndesmophyte formation, whereas SNP rs27044 was marginally associated. The SNPs rs27037 and rs27980 were not significantly associated [74]. Küçükşahin et al. [95] found that the mean Bath ankylosing spondylitis disease activity index (BASDAI), BASFI, Bath ankylosing spondylitis metrology index (BASMI), and ankylosing spondylitis disease activity score-C-reactive protein (ASDAS-CRP) values were greater among those with the ERAP1 rs26653 C/C SNP genotype than among other patients; the differences were statistically significant. However, they reported similarities between patients with different rs26653 SNP genotypes (C/C, C/G, or G/G) regarding the frequency distributions of many clinical (presence of peripheral arthritis or uveitis) and demographic (sex, family history of SpA, age at disease onset) characteristics, except that individuals with enthesitis carried the rs26653 C/C SNP genotype more frequently than those without it. Li et al. [99], in their case-control association study, concluded that, except for SNP rs27510, which was significantly correlated with onset age in patients with axSpA, none of the examined ERAP1 SNPs showed a significant association with the demographic and clinical measurements of axSpA (age, sex, family history, onset site, dactylitis, peripheral arthritis, hip joint involvement, iritis, enthesitis, ESR, and CRP). Bugaj et al. [119] found that patients with the ERAP1 rs2287987 AA genotype more frequently presented with enthesitis; in addition, ERAP1 rs2287987 affected the initial CRP value among Polish patients, but this relationship was not statistically significant after Bonferroni correction.
In contrast, Asmaa et al. [94] found no differences in BASDAI, BASFI, or CRP levels in patients with axSpA carrying the rs30187 and rs10050860 polymorphisms. Nossent et al. [120] investigated the role of ERAP1 variants in the axSpA clinical phenotype. They discovered that the ERAP1 rs27044/rs30187 haplotype C/T is associated with a lower risk of extraspinal disease and systemic inflammation in Nordic patients with axSpA. However, no link was found between the ERAP1 haplotype and proinflammatory cytokine levels.
The SNPs of the ERAP1 gene correlated with the mSASSS but showed no correlation with the ASDAS-erythrocyte sedimentation rate (ASDAS-ESR). Significant differences were observed for the ERAP1 gene SNPs in ERAP1 and IL-17A levels in subjects after lipopolysaccharide and IFN-γ induction, but no significant difference was observed in IL-23 levels [121]. In a Portuguese axSpA cohort, Pimentel-Santos et al. [122] found no association between ERAP1 SNPs (including rs27044 and rs30187) and BASDAI, BASMI, mSASSS, or disease duration. Furthermore, Cinar et al. [93] discovered that the rs26653 SNP linked to axSpA risk was unrelated to disease activity or functional scores in a Turkish population. Additionally, no significant associations were found between carriage of the rs27044 allele and sex, past or present peripheral arthritis, age at first complaint, or the years between these first complaints and the diagnosis of axSpA [123].
A French study [124] investigated the association between SNPs located in ERAP1 and sacroiliac joint (SIJ) and spinal MRI inflammation in early-onset SpA. One SNP located in ERAP1 (rs27434) and the CCT haplotype of ERAP1 were associated with SIJ inflammation detected by MRI, but these associations did not reach the Bonferroni-corrected threshold of significance. Otherwise, no relationship was found between ERAP1 SNPs (rs30187, rs27044, rs27434, rs17482078, rs10050860, and rs2287987) and axSpA activity as measured by SIJ inflammation on MRI, BASDAI score, ASDAS-CRP, or CRP in the French population.
Conclusions
The role of HLA-B27 in axSpA pathogenesis is unclear. However, over the past decade, ERAP1 has been shown to play an important role. Identifying additional non-MHC susceptibility loci for axSpA, such as ERAP1, is of particular interest because it highlights the critical biological pathways involved in SpA pathogenesis, which may have a potential therapeutic impact. Modulating ERAP1 function through the design of inhibitors may be a vital tool for changing immune responses in SpA. The effect of ERAP1 polymorphisms on susceptibility to axSpA may vary among ethnic groups. The epistatic interaction between ERAP1 SNPs and the HLA-B27 locus in axSpA pathogenesis has been confirmed in several studies. However, this interaction requires further investigation in different ethnicities. The association of ERAP1 SNPs with the clinical assessment of axSpA is inconsistent across various disease-activity parameters. Therefore, it is crucial to identify ERAP1 genetic variations in different populations to gain insight into its role in the susceptibility and severity of axSpA.
any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
FIGURE 1 :
FIGURE 1: Multi-step pathogenesis of axSpA. axSpA: Axial spondyloarthritis, HLA: Human leukocyte antigen, TNF: Tumor necrosis factor receptor. Figure adapted from Fatica et al. [17]. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Figure
Figure adapted from Dhatchinamoorthy et al. [30]. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
FIGURE 3 :
FIGURE 3: Illustration of the heterodimeric ERAP2/ERAP1 model. A: The proposed heterodimeric ERAP2/ERAP1 model B2 from a representative snapshot taken from a 100-ns MD simulation. The snapshot is the centroid of the highest-populated cluster of conformations, representing 64% of the trajectory within a 2 Å RMSD of all Cα atoms. B: Close-up view showing the two key salt-bridge interactions between helix 8 of ERAP1 (orange C atoms) and ERAP2 (cyan C atoms). Disulfide bridges that stabilize exon 10 loops are shown as sticks. C: Close-up view of the dimeric interface illustrating the hydrophobic/aromatic interactions between ERAP1 (orange C atoms) and ERAP2 (cyan C atoms) [35]. ERAP: Endoplasmic reticulum aminopeptidase. Figure adapted from Papakyriakou et al. [35]. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
FIGURE 4 :
FIGURE 4: Illustration of various functions of ERAP1 and ERAP2. Endoplasmic reticulum aminopeptidases play diverse roles in various biological processes, including (a) the final step in peptide trimming in the ER for presentation on MHC class I molecules; (b) the shedding of several cytokine receptors; (c) postnatal angiogenesis; and (d) the regulation of blood pressure [40]. ERAP: Endoplasmic reticulum aminopeptidase, ER: Endoplasmic reticulum, TNFR1: Tumor necrosis factor receptor 1, ACE: Angiotensin-converting enzyme, PDK1: Pyruvate dehydrogenase kinase 1, VEGFRs: Vascular endothelial growth factor receptors, MHC: Major histocompatibility complex. Figure adapted from Wu et al. [40]. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Figure adapted from Papakyriakou and Stratikos [56]. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Figure adapted from Kavadichanda et al. [48]. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Gauge independent effective gauge fields
The problem of gauge independent definition of the effective gauge field is considered. The Slavnov identities corresponding to a system of interacting quantum gauge and classical matter fields, the latter playing the role of a measuring device, are obtained. With their help, in the case of power-counting renormalizable theories, gauge independence of the effective device action is proved in the low-energy limit, which allows one to introduce a gauge independent notion of the effective gauge field.
Introduction
Description of quantized fields by means of the effective action (EA) is the most general in quantum theory. Being the sum of all one-particle-irreducible diagrams, the EA for a given theory allows one to calculate any Green function of that theory. It is well known that it can also be given a nonperturbative definition, as the Legendre transform of the logarithm of the generating functional of the Green functions. The formal analogy between the classical equations of motion and the quantum equations describing the dynamics of the mean fields suggests a natural interpretation of EA as the quantum substitute for its classical counterpart. However, the explicit dependence of EA on the way the theory is quantized precludes direct physical application of this remarkable analogy. The most important kind of such dependence, which attracts our attention in this Paper, is the gauge dependence of EA for gauge theories.
It is not our purpose to investigate anew the various procedures formulated by many authors in attempts to construct a gauge independent object from EA. Instead, we would like to draw attention to a possible physical reason for the gauge dependence of EA, recently pointed out by Dalvit and Mazzitelli [1]. In the case of quantum gravity they showed that the motion of a classical device measuring the effective gravitational field is independent of the choice of gauge conditions fixing the general coordinate invariance. More precisely, the equations of motion (geodesic equation) of a test particle in the effective static gravitational field of a point mass, calculated in the one-loop approximation up to leading logarithms, were shown to be independent of the choice of linear gauge.
The point is that while the graviton-test particle quantum interaction is negligible in calculation of the total effective gravitational field, it is not when the equations of the test particle motion are to be determined. It turns out that in the latter case the gauge dependent part of the contribution due to graviton-test particle interaction just cancels that corresponding to the ordinary gauge dependence of the mean field.
This fact offers a tempting possibility to change our plain view on the problem of gauge independent definition of the effective gravitational field, and look at it through a prism of the measurement. In other words, we can try to describe the effective gravitational field in terms corresponding to the measuring device. For example, in the case considered in [1] it is the form of the equations of the test particle motion by which the effective gravitational field is implicitly described.
Whether a proper definition of the effective field can be given in this way, depends on resolution of the following questions: 1. Whether the special choices of the source for the gravitational field and of the measuring device made in [1] are essential for the aforementioned cancellation.
2. Whether this cancellation holds at any order of the loop expansion and for all energies (not only for the one-loop low-energy leading quantum corrections).
3. If the effective gravitational field is described through characteristics of the measuring device, is such a description actually independent of the choice of device, for the concept of the effective action to be self-contained. 4. Is all of this inherent to the gravitation, or represents a general property of gauge interactions.
The purpose of this Paper is to show that the answer to 1.,4., and to the first part of 2. is really positive, i.e. the low-energy leading quantum corrections to the equations of motion of any kind of classical matter (infinitely weak) interacting with the gravitational or any other gauge field are gauge independent at any order of the loop expansion. In sec.2 we introduce notations and display some basic tools used later in investigation of EA properties. In sec.3 the Slavnov identities for the generating functionals of the Green functions corresponding to the system gauge field plus device are derived, on which basis the renormalization equations for divergent parts of these functionals are obtained in sec.4. These equations allow to demonstrate the gauge dependence cancellation most generally. In sec.5 we briefly discuss the rest of the problems listed above, and make conclusions.
The quantum effective action
The reason for the cancellation of the gauge dependence found by Dalvit and Mazzitelli may lie, of course, only in the residual symmetry of the Faddeev-Popov quantum action for the gauge field, the Becchi-Rouet-Stora-Tyutin (BRST) symmetry [2]. Having the form of the ordinary gauge transformation for the gauge and matter fields, the latter is indifferent to the specific structure of the classical action for these fields. Therefore, following the standard procedure of derivation of the Slavnov identities for the generating functionals of the Green functions, we can try to obtain analogous identities for the system of the gauge field plus measuring device in the most general form.
We consider a general type gauge theory described by an action S(A a , φ i ), where A a , a = 1, ..., n denotes the gauge field and φ i , i = 1, ..., m -matter fields of any kind.
If the pure gauge theory describes free fields A a , then a number of quantum matter fields interacting with A a should be included in φ. However, for notational simplicity we suppose that the gauge field is self-interacting and φ contains only classical matter fields. Furthermore, the part of φ corresponding to the sources for A a can be omitted, since any desired A-field configuration can be formally obtained by appropriate choice of the standard source term J a A a which is normally introduced into the generating functional of the Green functions. Thus, we suppose the fields φ i to describe the measuring device only. The latter is a classical object in the ordinary sense that the low-energy quantum corrections to its equations of motion due to propagation of the φ-fields can be neglected, which usually means that the device should be sufficiently heavy. Following [1] we also require the device contribution to the total gauge field to be infinitely small. One could suppose, for example, that the coupling constants of the gauge field-device interaction are sufficiently small. Since, however, it is not always possible to choose these constants arbitrarily small 1 , we simply imagine that the device action enters the full action with a small overall coefficient.
It is problematical to satisfy the above requirements in the case of gravity, since they contradict to each other. Even more: in this case we cannot satisfy the first of them alone because there is no such a thing as the classical source for gravity, as was pointed out in [3]. Therefore, in the case of measurement of the effective gravitational field we are forced to introduce the classical form for the device action "by hands".
Let the action S(A, φ) be invariant under the following (infinitesimal) gauge transformations where D aα (A),D iα (φ) are the generators, and ξ α , α = 1, ..., N are arbitrary gauge functions of the gauge transformations. We suppose that these generators form a closed algebra where the "structure constants" f γαβ are some linear differential operators which we assume to be field-independent, for simplicity. Commas followed by indices denote functional differentiation with respect to the corresponding fields, and DeWitt's summationintegration on repeated indices is supposed.
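In compact notation, the transformations and closure condition just described take the standard schematic form (a sketch using the index conventions above, not a reproduction of the paper's displayed equations):

```latex
% Infinitesimal gauge transformations of the gauge and matter fields
\delta A_{a} = D_{a\alpha}(A)\,\xi^{\alpha}, \qquad
\delta \varphi_{i} = \bar{D}_{i\alpha}(\varphi)\,\xi^{\alpha},
% Closure of the gauge algebra with field-independent
% ``structure constants'' f^{\gamma}{}_{\alpha\beta}
D_{a\alpha,b}\,D_{b\beta} - D_{a\beta,b}\,D_{b\alpha}
  = D_{a\gamma}\, f^{\gamma}{}_{\alpha\beta}\,.
```

Here the commas denote functional differentiation, as in the text, and the f's may be differential operators.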
To fix this invariance we impose an arbitrary gauge condition F α (A) = 0. For simplicity, we suppose that it is linear in the field A: F α (A) ≡ F α,a A a , where F α,a is some (differential) operator independent of the fields. Weighted in the usual way this gauge condition enters the Faddeev-Popov (FP) quantum action S f p in the form of the gauge-fixing term ξ being the weighting parameter. Introducing FP ghost fields C α ,C α we write the FP quantum action λ being a constant anticommuting parameter.
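For a linear gauge condition F_α(A) = F_{α,a}A_a weighted with the parameter ξ, the construction sketched above reads, in textbook form (a schematic sketch, up to sign and normalization conventions, not the paper's numbered equations):

```latex
% Faddeev--Popov action: gauge-fixing and ghost terms
S_{fp} = S(A,\varphi)
  - \frac{1}{2\xi}\,F_{\alpha}(A)\,F^{\alpha}(A)
  + \bar{C}^{\alpha}\,F_{\alpha,a}\,D_{a\beta}(A)\,C^{\beta},
% Residual BRST transformation, \lambda a constant
% anticommuting parameter
\delta A_{a} = D_{a\alpha}(A)\,C^{\alpha}\lambda, \quad
\delta C^{\gamma} = -\tfrac{1}{2}\,f^{\gamma}{}_{\alpha\beta}\,
                    C^{\alpha}C^{\beta}\lambda, \quad
\delta \bar{C}^{\alpha} = \tfrac{1}{\xi}\,F^{\alpha}(A)\,\lambda .
```

The BRST variation of A and φ has the form of an ordinary gauge transformation with ξ^α replaced by C^α λ, which is the property exploited throughout the derivation.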
To be able to write down the Slavnov identities for the generating functional of connected Green functions, we introduce sources for the BRST transformations, following Zinn-Justin [6], and obtain the quantum action Σ in the corresponding form. Introduction of the source K̄ is dispensable, since D̄ iα C α is linear in the quantum fields. However, it allows one to write the Slavnov identities in a form containing no explicit information on the structure of the gauge algebra and, in addition, to omit all ghost sources from the very beginning (these sources should be restored, however, in renormalization of the theory). Below we consider the most important kind of gauge dependence of EA, namely its dependence on the weighting parameter ξ. The general case contains no principal complications and can be handled, for example, by extending the field content of the theory to include a number of auxiliary fields introducing the gauge, and employing the method of anticanonical transformations (see, e.g., [4]). A natural way to investigate the ξ-dependence is to introduce into the quantum action a term with Y being a constant anticommuting parameter [5]. Thus we write the generating functional of the Green functions as Eq. (8). (Footnote 2: φ-fields are not integrated in Eq. (8). Following [1] we consider them as c-functions, and the absence of these fields in the integral measure reflects the fact that we neglect all the quantum contributions due to their propagation.) Diagrammatically, this situation is illustrated in Fig. 1. Fig. 1(a) represents a typical vertex of the gauge field-device interaction according to the standard definition of the effective field as the quantum average of the corresponding field operator. It is implied by this definition that the mean field is simply put into the classical equations of device motion instead of its tree value. Such vertices are local, unlike those given by the generating functional (8) and represented in Fig. 1(b).
To determine effective equations of device motion we have to consider the sum of diagrams like that pictured in Fig.1(b), each having only one insertion of a φ-vertex, since the device action is supposed to be infinitely small. Now, introducing the generating functional of the connected Green functions we define the effective action Γ as the Legendre transform of W with respect to the mean gauge field (denoted by the same symbol as the corresponding field operator): where the function J(A) is implicitly defined by Eq. (10).
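The chain of definitions Z → W → Γ invoked here is the standard Legendre-transform construction; schematically (a sketch of the standard relations, not the paper's numbered equations):

```latex
Z(J) = e^{\,iW(J)}, \qquad
A_{a} \equiv \frac{\delta W}{\delta J^{a}}, \qquad
\Gamma(A) = W\bigl(J(A)\bigr) - J^{a}(A)\,A_{a}, \qquad
\frac{\delta \Gamma}{\delta A_{a}} = -\,J^{a}.
```

The last relation, inverted, gives the effective equations of motion for the mean field discussed next.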
In the standard interpretation, the inverse of Eq. (10), namely Eq. (12), gives the effective equations of motion for the full quantum-corrected field A corresponding to the given background field configuration A 0 satisfying Eq. (13). The use of J as the source for the field A 0 instead of realistic matter sources, though formal, allows one to simplify the derivation of the Slavnov identities below. As always, the source J satisfies the "conservation law" (14), where A 0 is the solution of Eq. (13).

The Slavnov identities
Preliminaries
Being a classical object, the measuring device is completely described by its action. We can therefore investigate the gauge dependence of the latter rather than of the corresponding equations of motion (as was done in [1]). The action for the measuring device is the part of Γ containing the fields φ i . Its gauge dependence is determined by the Slavnov identities for the generating functional of proper vertices corresponding to (8). However, these identities are complicated because of the peculiar role of the device action, which is a kind of source for the gauge field. It is the nonlinearity of this source in the fields which complicates the usual derivation of the Slavnov identities. Fortunately, in the present case we can limit ourselves to the derivation of the Slavnov identities for the functional W only. Indeed, the gauge dependent part of the device action is a sum of two different contributions. The first is the ordinary explicit gauge dependence of the effective action. The second results from the implicit gauge dependence of the mean field A. Being a solution of the gauge dependent effective equations (12), the latter is also gauge dependent. It is precisely this gauge dependence of A which lacks physical interpretation. Thus, denoting by Γ φ the part of Γ containing φ-fields, we have Eq. (15) for the full variation of the device action under a small change δξ of the gauge parameter ξ. In (15) the derivative ∂A a /∂ξ is calculated keeping J fixed, in accordance with the meaning of J as producing the given classical field A 0 . Now note that if we define the quantity W φ by analogy with Γ φ , i.e., as the part of W containing φ, then Eq. (16) holds, since the device action is supposed to be infinitely small. Comparing (15) and (16) we arrive at the important relation (17). Thus, though perhaps needed in carrying out the renormalization program, the Slavnov identities for Γ turn out to be unnecessary in our consideration.
Let us now go over to the successive derivation of the Slavnov identities for W.
Derivation
Following the standard procedure (see, e.g., [6]), we perform a BRST shift (5) of integration variables in (8). Unlike the usual case, however, the quantum action (6) is not invariant under this operation, since besides the quantum fields A, C, C̄ it contains the classical field φ, which is not integrated in (8). Therefore, we obtain the identity (18), where S φ is the classical device action.
Then the first term in square brackets on the left-hand side of (18) can be transformed as shown, where locality of the generators D̄(φ) and the property δ(0) = 0 were taken into account. The latter also implies that the third term in square brackets in (18) is equal to zero. Indeed, performing a shift C̄ → C̄ + δC̄ of integration variables in the functional integral (8), we obtain the quantum ghost equation of motion, from which this follows. Thus, the identity (18) can be rewritten as (23). This is the sought identity for the generating functional of the Green functions. It can be called the effective Slavnov identity, since it is obtained under certain conditions concerning the device motion. In terms of the functional W it takes the form (24). In the next section, (24) will be used to prove the low-energy gauge independence of the effective device action.

The gauge dependence cancellation
The renormalization equation
Definition of the device as a classical object, reflected in the way its action is introduced into the generating functional Z, implies certain conditions under which the device motion can be treated in this way; namely, it corresponds to the effective description of the device motion at low energies. It is well known [3,7] that in this case the leading quantum contribution to EA is due to non-analytical terms in the amplitudes containing the logarithms of external momenta. On the other hand, the form of the latter can simply be read off from the divergent parts of the amplitudes, since it is propagation of the massless particles of the theory which dominates at low energies (see, e.g., [3,8]). For example, in the case of dimensionally regularized Feynman integrals of the type (25), where ε is the dimensional regulator, µ the mass scale, and f(q, p 1 , ..., p n ) the result of all subintegrations, the low-energy leading contributions, corresponding to powers of the logarithms of the external momenta p 1 , ..., p n , are given by the zeroth-order terms in the Laurent expansion of (25) in powers of ε, and are unambiguously determined by the poles of (25). Thus, to determine the full gauge dependence of the device effective action it is sufficient, in view of the relation (17), to investigate that of the divergent parts (W div ) of the generating functional W.
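The pole-logarithm correspondence used in this argument can be illustrated with the simplest massless one-loop integral (a standard result in dimensional regularization; the scheme-dependent constant is left unspecified):

```latex
\mu^{2\varepsilon}\!\int\!
  \frac{d^{\,4-2\varepsilon} q}{(2\pi)^{4-2\varepsilon}}\,
  \frac{1}{q^{2}\,(q+p)^{2}}
= \frac{i}{16\pi^{2}}
  \left[ \frac{1}{\varepsilon}
       - \ln\!\frac{-p^{2}}{\mu^{2}}
       + \mathrm{const} \right]
  + O(\varepsilon).
```

The residue of the 1/ε pole fixes the coefficient of ln p², which is why the divergent parts W div determine the low-energy leading logarithms.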
To do this, we use the Slavnov identity (24) to obtain the renormalization equation for W div . Namely, we first separate the Y-dependent part of W and substitute it in (24). Comparing coefficients of Y on the left- and right-hand sides of this identity then gives (26), where all the sources except J a are set equal to zero after differentiation. Next, we extract the φ-dependent part of (26) and obtain the identity (27), where the symbol W 2φ denotes the part of W 2 independent of the gauge field-device interaction. All of these identities are derived for invariantly regularized, but still unrenormalized, functionals. Being connected with the high-energy behavior of the Green functions, the renormalization of EA is immaterial in the determination of the low-energy quantum corrections to the device motion. On the other hand, since we use the formal correspondence between the divergences of EA and the form of the logarithms in reconstructing the leading quantum contributions to Γ φ , the effect of renormalization on the structure of divergences might seem to be important for us. However, as mentioned above, the use of the generating functional of the Green functions in the form of Eq. (8) is justified only in the low-energy regime of the device motion. Instead, the renormalization of the theory must be carried out, of course, in terms of the ordinary generating functional, for which Eq. (8) is just an effective expression, and in which all the fields, including those corresponding to the measuring device, are considered as quantum. Thus, at each given order of the loop expansion it has to be supposed that all the subdivergences of the Green functions have been eliminated at lower orders according to the standard procedure, so that only superficially divergent diagrams remain.
It is the general result of the renormalization theory [4,6] that this procedure can be arranged in the way that preserves the symmetry properties of the generating functionals of the Green functions. Thus, we suppose that the functional W renormalized, say, up to (n − 1) th-loop order, satisfies the identity (27) 5 and has local divergences of order n.
As follows from Eq. (17) divergences of ∂W 1φ /∂ξ are to be determined after the substitution J → J(A) has been made. As always, this means that the corresponding one-particle-irreducible 6 diagrams should be considered only. In the present case, however, one may substitute J → J(A 0 ) directly, the function J(A 0 ) being determined by Eq. (13). Indeed, additional divergences associated with the reexpressing of the right hand side of Eq. (27) in terms of the mean field A, can appear, by assumption, only in the n th-loop order. However, they actually do not contribute at this order, since the right hand side of Eq. (27) vanishes at the zeroth order, as one can easily verify 7 .
Thus, splitting W φ into the sum of divergent and convergent parts and noting that the corresponding parts of the identity (27) must cancel independently, we obtain the renormalization equation for W div(n) φ where the superscript (0) denotes the zeroth order approximation. Let us now turn to examination of the right hand side of Eq. (28).
The power counting
We begin with the definition of vertices and field propagators in the loop expansion. According to the standard procedure, one expands the exponent of the integrand in the functional integral (8).

Footnote 5: Strictly speaking, in derivation of the Slavnov identities for the renormalized generating functionals Z, W, W φ a possible implicit gauge dependence of the counterterms should be taken into account, which results in additional divergent structures appearing in these identities [9]. However, we omit them in the effective Slavnov identities (23), (24) since these additional terms describe purely high-energy properties of the underlying theory.
Footnote 6: Irreducible with respect to A-lines.
Footnote 7: This corresponds to the fact that at the tree level the device action is obviously independent of the gauge parameter ξ weighting the gauge condition.
Footnote 8: The term −iδ 2 W div(n) 2φ /δφ i δK i is omitted in Eq. (28), since it is proportional to δ(0) due to locality of divergences.
where the ellipsis denote terms of cubic and higher order in the quantum fields a ≡ A − A 0 , C,C. Note that in view of Eq. (14) the term Y F α (A 0 )C α is absent in this expansion. Therefore, the second term in the right hand side of Eq. (28) vanishes identically.
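The background-field split underlying this expansion is, schematically (a standard sketch in the notation of the text):

```latex
A_{a} = A^{0}_{a} + a_{a}, \qquad
\Sigma(A^{0}+a) = \Sigma(A^{0}) + \Sigma_{,a}\,a_{a}
  + \tfrac{1}{2}\,\Sigma_{,ab}\,a_{a}a_{b} + \cdots
```

The quadratic part defines the propagators of the quantum fields a, C, C̄, while the cubic and higher terms define the vertices of the loop expansion.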
For further examination of Eq. (28) it is necessary to employ dimensional analysis. At this point we have to limit our general consideration and require the theory to be power-counting renormalizable. Although quantum consequences of the original gauge symmetry of the classical action are normally expressed in the same form (like that of Eq. (28)) at all orders of the loop expansion, even despite possible deformations of the gauge algebra, the strength of divergences of Feynman diagrams varies, in general, from order to order. However, it is a common feature of all power-counting-renormalizable gauge theories that the degree of divergence D of an arbitrary diagram with a set {n F} of external lines, where {F} = {A 0 , Y, K, K̄}, is less than or equal to a bound D̄ given by Eq. (29), σ being the canonical dimension of the gauge field a. The case D < D̄ corresponds to theories with superrenormalizable interactions. It can be inferred from Eq. (29) that in the case of σ = 1 (e.g., Yang-Mills theories) the third term on the right-hand side of Eq. (28) is zero. Indeed, the only divergent diagram with n K̄ = n Y = 1 in this case corresponds to n A0 = 1, and vanishes, since the ghost vertex connected with the external K̄-line by the ghost propagator contains the gauge condition operator F α,a , which vanishes upon acting on the rest of the diagram. This is illustrated in Fig. 2.
As far as the case σ = 0 is concerned (e.g., R²-gravity), there is an infinite number of logarithmically divergent diagrams with n K̄ = n Y = 1 and an arbitrary number of external gauge fields. In this case the above argument goes through if we confine ourselves to calculation of the gauge invariant part of the device action only (Footnote 9: which is sufficient for determination of the low-energy effective device action).

Finally, it follows from (29) that if the φ-vertex were absent, then the remaining term on the right-hand side of Eq. (28) would diverge, with D̄ = σ. Whether it does depends on the form of the device-gauge field interaction. Obviously, insertion of a vertex corresponding to this interaction makes a diagram with D̄ = σ convergent if and only if a simple condition holds on N a , N ∂ , the numbers of gauge fields entering the vertex and of derivatives acting on them, respectively. This condition is obviously satisfied if the full underlying quantum theory of interacting gauge and matter fields is also power-counting renormalizable. Summing up, the right-hand side of Eq. (28) turns out to be zero, thus proving gauge independence of the low-energy effective action of the measuring device.
Discussion and Conclusion.
We have shown that in the case when the quantum propagation of the fields describing the measuring device can be neglected, namely in the low-energy classical limit, the effective equations of device motion turn out to be gauge independent at any order of the loop expansion.
This allows one to define in the same limit the gauge independent effective gauge field as the field that enters these equations and couples to the measuring device in the classical fashion. We would like to emphasize that it is the purely classical nature of the observables (which are functionals of the φ-fields) due to which the well-known problem of their unambiguous definition [10,11] does not arise in our consideration. So, whether it is possible to extend the definition to higher energies depends on eventual applicability of the classical conceptions contained in the notion of measurement. Now, turning back to item 3 of the Introduction, it is natural to ask whether the value of the effective gauge field, defined in the manner described above, is one and the same for all measuring devices. It definitely is in the case of infinitesimal device action, considered above. Indeed, in this case account of any possible dependence of the effective gauge field on characteristics of the measuring device would exceed the precision chosen in our discussion. It is not clear, however, whether this is true in the general case of finite disturbances produced in the effective field by the process of measurement.
Effects of cyclosporin A on the development of osteonecrosis in rabbits.
Background Osteonecrosis (ON) of the femoral head is a serious complication in patients who have undergone organ transplantation. Introduction of cyclosporin A has resulted in lower-dosage steroid treatment and a decrease in the occurrence of ON. We examined the effect of cyclosporin A on the development of ON in rabbits. Methods In experiment A, rabbits were given cyclosporin A and 20 mg/kg methylprednisolone acetate. The control group was given 20 mg/kg methylprednisolone acetate only. Experiment B was then performed to mimic the clinical situation in which the use of cyclosporin A and lower steroid doses resulted in a decrease in occurrence of ON. In Experiment C, the effects of treatment with cyclosporin A only on development of ON were examined. 4 weeks after injection, bilateral femora and humeri were examined histopathologically for ON. Results Cyclosporin A increased the incidence of ON in rabbits when given in combination with steroid (p = 0.04). No ON lesions were observed in rabbits treated with cyclosporin A alone. Interpretation Our findings suggest that the clinically reported reduction in occurrence of ON following the use of cyclosporin A is probably attributable to the lower steroid doses used.
Osteonecrosis (ON) of the femoral head is a serious complication in patients undergoing organ transplantation, including kidney, bone marrow, and liver allografts (Landmann et al. 1987, Kubo et al. 1997, Lieberman et al. 2000, Torii et al. 2001). MRI analysis has demonstrated an ON incidence of 21% and 19% for kidney and bone marrow transplantation, respectively (Kubo et al. 1997, Torii et al. 2001).
The precise etiology of posttransplant ON remains unclear. Steroid treatment following transplantation is generally considered to be a cause of ON, as steroid dosage has been found to be correlated with the incidence of ON after organ transplantation (Fink et al. 1998, Kubo et al. 1998). Several possible pathogenic mechanisms for steroid-induced ON have been suggested by experimental studies, including thrombophilic and hypofibrinolytic coagulation abnormalities and hyperlipidemia (Glueck et al. 1994, Jones JP Jr 1993, Yamamoto et al. 1997, Miyanishi et al. 2002). Cyclosporin A (CsA) is a potent immunosuppressive drug that is widely used to prevent rejection of transplanted organs (Powles et al. 1980). The conversion from high-dose steroid treatment to combined CsA and low-dose steroid treatment resulted in a reduction in the incidence of ON (Landmann et al. 1987). We examined the effects of CsA on the development of ON in rabbits, alone and together with steroid treatment.
Animals and methods
We used a rabbit model of steroid-induced ON (Yamamoto et al. 1997). No surgery was performed in the rabbits. Administration of a single high dose (20 mg/kg) of methylprednisolone acetate (Upjohn, Tokyo, Japan), simulating a dose of human steroid pulse therapy, reproducibly causes ON lesions in this model. All experiments were reviewed by the Common Ethics Committee for Animal Experiments at our university, and were conducted in accordance with the Guidelines for Animal Experimentation, the Law (no. 105), and notification (no. 6) of the government and the Committee on Ethics in Japan.
Animals

90 adult (i.e. with the growth plate already closed) male Japanese white rabbits (Kyudo, Tosu, Japan) weighing 3.3-3.9 kg were used at the Animal Center of our university and maintained on a standard laboratory diet and water. Their ages ranged from 28 to 32 weeks.
Experimental design
Experiment A. 22 rabbits were given 25 mg/kg/day of CsA (Novartis, Tokyo, Japan) intramuscularly for 2 days (Green et al. 1978, Dunn et al. 1979, Durak et al. 1998), and also a single dose (20 mg/kg) of methylprednisolone acetate after the second CsA injection. 43 rabbits were given a single dose (20 mg/kg) of methylprednisolone acetate only intramuscularly as a control. Different numbers of rabbits were used for the experimental and control groups due to limitation of the amount of CsA available.
Experiment B. 15 rabbits were given 25 mg/kg/day of CsA intramuscularly for 2 days, together with a single dose of 8.8 mg/kg methylprednisolone acetate after the second CsA injection. The 43 rabbits given a single dose (20 mg/kg) of methylprednisolone acetate only in experiment A were also used as controls in this experiment.
Experiment C. We examined the effect of treatment with CsA alone on development of ON. 10 rabbits were given CsA (25 mg/kg/day) intramuscularly for 2 days (Green et al. 1978, Dunn et al. 1979, Durak et al. 1998).
Tissue preparation
4 weeks after injection of methylprednisolone acetate (experiments A and B) or the first CsA injection (experiment C), animals were anesthetized with an intravenous injection of pentobarbital sodium (25 mg/kg of body weight) (Abbott Laboratories, Abbott Park, USA) and then killed by exsanguination via aortectomy. For light microscopic examination, both femora and humeri (for a total of 4 bone samples per rabbit) were obtained at the time of death and fixed in a 10% formalin-0.1M phosphate buffer (pH 7.4) for 1 week. Bone samples were decalcified with 25% formic acid for 3 days and then neutralized with 0.35 M sodium sulfate for 3 days. Samples were then cut along the coronal plane in the proximal one-third and axial plane in the distal part (condyle). Lastly, the specimens were embedded in paraffin, cut into 4-µm sections, and stained with hematoxylin and eosin.
Evaluation of osteonecrosis
Whole areas of the proximal one-third and distal condyles of both femora and humeri (for a total of 8 regions) were examined histopathologically for the presence of ON. Diagnosis of ON was made in blinded fashion by 3 authors (KM, TY, TI), based on the diffuse presence of empty lacunae or pyknotic nuclei of osteocytes in the bone trabeculae, accompanied by cell necrosis in the surrounding bone marrow (Yamamoto et al. 1997, Miyanishi et al. 2002). The 3 examiners independently made a diagnosis of ON for each sample without knowing the group from which the sample had come. Rabbits that had at least one osteonecrotic lesion in the 8 areas examined were considered to be rabbits with ON. The numbers of rabbits with ON and number of osteonecrotic regions per rabbit (maximum 8 regions) were determined.
Hematological examination
Blood samples were obtained from fasted rabbits prior to the experiment and at 1, 2, 3 and 4 weeks after injection of methylprednisolone acetate (experiments A and B) or the first CsA injection (experiment C). Hematological evaluations of low-density lipoprotein (LDL) and very low-density lipoprotein (VLDL) plasma levels were carried out using the turbidimetric method (Kawai et al. 1978).
Whole blood concentrations of CsA were measured with a fluorescence polarization immunoassay kit (Abbott Laboratories, Abbott Park, USA). Measurements were taken 1 and 2 weeks after methylprednisolone acetate injection in experiment A, because we have found that the 1-2 week period after methylprednisolone acetate injection is critical for development of ON in this rabbit model (Yamamoto et al. 1997, Miyanishi et al. 2002).
Statistics
Numbers of ON-positive rabbits were compared using Fisher's exact test (experiment A) or chi-square test (experiment B). Numbers of ON regions per rabbit and hematological data were compared using the Mann-Whitney U-test and unpaired Student's t-test, respectively. Statistical analyses were performed using StatView J-5.0 (SAS Institute Inc., Cary, NC). P-values ≤ 0.05 were considered significant.
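The incidence comparison above (Fisher's exact test on a 2×2 table of ON-positive versus ON-negative rabbits) can be sketched with the standard library alone. The counts below are hypothetical, chosen only to illustrate the test, and are not the study's data.

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].
    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table."""
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_table(x):  # probability of the table whose (1,1) cell equals x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

# Hypothetical counts (illustration only):
# rows: steroid + CsA group, steroid-only group; cols: ON-positive, ON-negative
p = fisher_exact_p(18, 2, 24, 16)
print(f"two-sided Fisher p = {p:.4f}")
```

A dedicated statistics package (e.g. scipy.stats) would normally be used instead; the hand-rolled version above just makes the test's definition explicit.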
Results

5 rabbits died during the experiments. 2 of the rabbits treated with both methylprednisolone acetate and CsA and 3 of the rabbits receiving methylprednisolone acetate alone died 2 weeks after injection of methylprednisolone acetate. These 5 rabbits were excluded from the study. The dead rabbits with combined CsA and steroid treatment all belonged to experiment A.
Histopathological features
Macroscopically, osteonecrosis appeared as yellowish-colored areas. Histologically, ON lesions showed an accumulation of bone marrow cell debris and bone trabeculae with empty lacunae (Figure 1). For experiments A and B, these findings were consistent in all osteonecrotic tissues. All 3 examiners agreed on the diagnosis of ON in every sample. In experiment C, the histological findings for bone marrow cells and bone trabeculae were almost normal in rabbits given cyclosporin A alone. Quantitative assessment of thrombus and lipid embolus formation was sometimes difficult due to variation in size, and was not performed.
Incidence of osteonecrosis
Experiment A. There was an increase in the incidence of ON in rabbits treated with methylprednisolone acetate and CsA relative to rabbits receiving methylprednisolone acetate alone (p = 0.04; Table 1). The number of ON regions per rabbit was 4.4 (SD 1.3) in rabbits receiving methylprednisolone acetate and CsA, whereas it was 2.5 (SD 1.6) in those receiving methylprednisolone acetate alone (p < 0.001; Figure 2A). Experiment B. The incidence of ON decreased in rabbits receiving reduced doses of methylprednisolone acetate together with CsA relative to rabbits receiving the original high dose of methylprednisolone acetate alone (p = 0.05; Table 2). However, no differences between the two groups were observed in the number of ON regions per rabbit (p = 1; Figure 2B). Experiment C. No ON lesions were observed in rabbits treated with CsA alone.
Hematological examination
In experiment A, the serum levels of LDL and VLDL were higher at 2 weeks in rabbits receiving methylprednisolone acetate and CsA than in those receiving methylprednisolone acetate alone (p < 0.001; Figure 3). There was no statistical difference in LDL and VLDL levels between the two groups at any time point examined, except at 2 weeks. In experiment B, the serum LDL levels were lower at 2, 3, and 4 weeks in rabbits receiving a reduced dose of methylprednisolone acetate and CsA than in those receiving the original dose of methylprednisolone acetate alone (p < 0.001; Figure 4A). The rabbits with a reduced dose of methylprednisolone acetate and CsA also showed lower serum VLDL levels at 1, 2, and 3 weeks as compared to control rabbits (p < 0.001; Figure 4B). In experiment C, LDL and VLDL levels remained unchanged throughout the experimental period (Figure 5).
Blood concentrations of CsA in rabbits given methylprednisolone acetate and CsA were 225 (SD 15) ng/mL and 167 (SD 13) ng/mL 1 and 2 weeks after methylprednisolone acetate injection, respectively. These concentrations are all within the immunosuppressive range, as determined on the basis of a previous study (Andersen et al. 1994).
Discussion
In this study, the dose of CsA was determined on the basis of previous studies (Green et al. 1978, Dunn et al. 1979, Durak et al. 1998). The dosage of immunosuppressant and duration of treatment used here produced blood drug levels falling within the human therapeutic range 1 and 2 weeks after methylprednisolone acetate injection, which represents a period that is critical for development of osteonecrosis in rabbits (Andersen et al. 1994, Yamamoto et al. 1997, Miyanishi et al. 2002). To the best of our knowledge, the effects of immunosuppressive drugs on the development of ON in humans have not yet been fully elucidated. An association between immune complexes and ON development has been demonstrated in rabbits (Nakata et al. 1996). In this respect, immunosuppressants may be protective in ON. On the other hand, ON has been increasingly recognized as an important complication in patients infected with human immunodeficiency virus (Allison et al. 2003), which may suggest a possible stimulatory effect of immunosuppressants on osteonecrosis. To address this question, experiment A was performed under conditions in which the same steroid dose was given to each group.
CsA therapy has been found to be responsible for increased incidence of thromboembolic complications such as renal thrombotic microangiopathy (Guillemain et al. 1990) and thrombus formation in systemic venous vessels (Vanrenterghem et al. 1985). Hyperlipidemia is another frequent complication of CsA treatment following organ transplantation (Colak et al. 2002). Our observation of higher serum levels of LDL and VLDL in CsA-treated rabbits at 2 weeks corroborates this finding. One possible explanation, which could partly account for the increased incidence of ON in rabbits treated with methylprednisolone acetate and CsA in experiment A, may be that CsA enhances a steroid-induced procoagulant and hyperlipidemic plasma state.
It is worth noting that an increase in the incidence of ON was observed only when CsA was given in the presence of steroids; no ON lesions resulted from treatment with CsA alone. One study of the interaction between CsA and steroids has documented reduced clearance of steroids in CsA-treated patients (Langhoff et al. 1985).
Experiment B was carried out in order to mimic the clinical situation in which the use of combined CsA and low-dose steroid treatment ultimately results in a reduction in occurrence of ON (Landmann et al. 1987). The reduced steroid dose used here was determined on the basis of a clinical steroid dose reduction rate reported previously (Landmann et al. 1987). One important limitation of experiment B was that the rabbits given 20 mg/kg methylprednisolone acetate only were not a proper control in the strict sense of the term, since they received a much higher dose of methylprednisolone acetate than the experimental group. However, this was used as a reference point, relative to which the effects of combinations of cyclosporin A and low-dose steroid given here could be evaluated.
In conclusion, we found that cyclosporin A increased the incidence of ON in rabbits when given in combination with steroid. The results suggest that the clinically reported decrease in occurrence of ON following the introduction of cyclosporin A is most likely attributable to the lower steroid doses used.
Contributions of authors
KM, TY, SJ and YI designed the research. KM, TY, TI, AY and GM did the experiment and collected and analyzed the data. KM, TY and TI made the histological examinations. KM wrote the draft manuscript. TY, SJ and YI revised the draft manuscript.
This work was supported in part by a grant from the Nakatomi Foundation, a grant-in-aid for JSPS fellows, and a grant for intractable diseases from the Ministry of Health and Welfare of Japan.
Removal of virus aerosols by the combination of filtration and UV-C irradiation
The COVID-19 pandemic remains prevalent and harmful, partly because one of its transmission pathways is aerosol. With central air conditioning systems widely used worldwide, indoor virus aerosols can migrate rapidly, resulting in rapid transmission of infection. It is therefore important to install microbial aerosol treatment units in air conditioning systems, and we herein investigated the possibility of combining filtration with UV irradiation to address virus aerosols. Results showed that the removal efficiency of filtration towards f2 and MS2 phages depended on the type of commercial filter material and the filtration speed, with an optimal velocity of 5 cm/s for virus removal. Additionally, it was found that UV irradiation had a significant effect on inactivating viruses enriched on the surfaces of filter materials; MS2 phages had greater resistance to UV-C irradiation than f2 phages. The optimal inactivation time for UV-C irradiation was 30 min, with longer irradiation times presenting no substantial increase in inactivation rate. Moreover, excessive virus enrichment on the filters decreased the inactivation effect. Timely inactivation is therefore recommended. In general, the combined system involving filtration with UV-C irradiation demonstrated a significant removal effect on virus aerosols. Moreover, the system is simple and economical, making it convenient for widespread implementation in air-conditioning systems.
As of December 2021, COVID-19 had caused the death of more than 5 million people worldwide. Previous studies have confirmed that virus aerosol transmission is one of the most prominent transmission pathways for SARS-CoV-2 as well as for other viral infections (Chen, 2021; Xie et al., 2021). Saliva droplets can carry the virus through the air, or they can evaporate into droplet nuclei and remain airborne for a long time as virus aerosols (Dancer et al., 2020; Peters et al., 2020; Santos et al., 2020; Bazant and Bush, 2021). Although it is difficult for these virus aerosols to spread in enclosed buildings through natural ventilation, the use of central ventilation systems allows virus aerosols to spread over longer distances (Correia et al., 2020; Jiang et al., 2021), thus infecting more people. Considering that people are indoors for 90 % of the day and almost all hyper diffusion events occur indoors (Klepeis et al., 2001; Castrillón and De Lasa, 2007), virus aerosols in enclosed buildings need to be addressed to reduce virus transmissions and infection risk.
Traditional techniques and methods for removing microbial aerosol contamination include increased ventilation, chemical disinfection, air filtration, UV irradiation, photocatalysis and thermal inactivation (Berry et al., 2022). Compared with other techniques, UV irradiation and filtration interception are economical and thus common methods for controlling indoor microbial contamination (Moreno et al., 2021). Filtration technology can effectively intercept different types of particulate matter, and is thus widely used for removing microbial aerosols from the air. The main filtration mechanisms are interception, inertial collision and diffusion. The factors affecting filtration efficiency include the size and shape of particles, porosity and thickness of the filter material, and the speed of air flow (Majchrzycka, 2014). Commonly used filter materials for filtering microbial aerosols include glass fiber, polytetrafluoroethylene fiber, polypropylene melt-blown nonwoven fiber, and polycarbonate fiber. According to their different filtration efficiencies, the filter materials are classified into different grades. The efficiency of high-efficiency particulate air (HEPA) filters can reach 99.97 % for particles with sizes of 0.3 μm and above, and they are usually used for microbial aerosol filtration in sterile areas (Curiel and Lelieveld, 2014; Raynor, 2016). Numerous studies have shown that filtration materials generally perform well against bacteria, fungi, or viruses, with filtration efficiencies ranging from 80 % to 99.9 % (Majchrzycka, 2014; Zou and Yao, 2014; Jeong et al., 2019).
However, these microorganisms trapped in the filter can rapidly multiply under the proper humidity, temperature, and nutrient conditions (Kemp et al., 2001;Kelkar et al., 2005). The filters may thus act as a source of secondary microbial pollution (Maus et al., 2001). Antibacterial agents (e.g., quaternary ammonium phosphate, polyhexamethylene, and nano silver) added to the filter material can inhibit the growth of microorganisms on the filter, but they are not widely used because they may react with the filter materials (Cecchini et al., 2004;Lee et al., 2010). UV irradiation, while able to inactivate viruses, also harms the skin and eyes. Therefore, researchers have considered installing such devices in air purifiers. However, UV irradiation can only play a limited role owing to the limited space in air purifiers. The average residence time of air inside the purification system is only a few seconds, whereas virus aerosols require a longer period of irradiation exposure for effective inactivation. Therefore, a single round of UV irradiation may struggle to effectively inactivate viruses within air purifiers Moreno et al., 2021).
In this study, aerosol filtration and UV-C irradiation were combined to effectively deal with virus aerosols. Although the two combined technologies have been applied in air purifier equipment prior to this study, the characteristics of filter material interception and UV-C inactivation of virus aerosols were not fully investigated. Especially, the virus's small size makes it harder to intercept by filtration than bacteria, while its health risks are of greater concern during the COVID-19 pandemic. With the right filtration material, the viruses and other microorganisms in aerosols can be captured, thereby providing sufficient time for inactivation via UV-C irradiation, while also inhibiting the reproduction and enrichment of microorganisms on the filter material. Therefore, in this study, f2 and MS2 phages were used as model viruses to investigate the effects of varying parameters of the combined technologies for the removal and inactivation of virus aerosols.
Methods and materials
The bacteriophages f2 and MS2 were used to simulate SARS-CoV-2 in this study, and the virus aerosol removal performance of the combination of filtration and UV-C irradiation was investigated. The effects of different filter materials and filtration rates on viral aerosol filtration efficiency, and the effects of UV-C irradiation intensity, exposure time, and initial virus concentration on the inactivation efficiency, were examined. The number of viruses was detected using the double agar plate method. The filtration removal rate and inactivation removal rate were calculated by detecting the virus aerosol and on-filter virus concentrations before and after filtration or irradiation, respectively. The filtration and inactivation efficiency for bacteriophages f2 and MS2 was calculated as follows (Eq. (1)):

Re = (N0 − Nt) / N0 × 100 %, (1)

where Re represents the removal efficiency of viruses by filtration or inactivation, N0 is the original concentration of the virus before filtration or inactivation, and Nt is the residual virus concentration after filtration or inactivation.
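As a minimal numerical illustration of Eq. (1) and of the "log" removal figures quoted later in the Results, the sketch below computes both quantities; the example concentrations are made up for the illustration, not measured values.

```python
import math

def removal_efficiency(n0, nt):
    """Fractional removal efficiency, Re = (N0 - Nt) / N0 (Eq. (1))."""
    return (n0 - nt) / n0

def log_reduction(n0, nt):
    """Log10 reduction -- the '2 log' / '3 log' figures used for interception."""
    return math.log10(n0 / nt)

# Illustrative concentrations in PFU/m^3 (not measured values):
n0, nt = 2.0e5, 2.0e3
print(f"Re = {removal_efficiency(n0, nt):.1%}, {log_reduction(n0, nt):.1f} log removal")
# prints: Re = 99.0%, 2.0 log removal
```

Note that a 99 % removal efficiency and a "2 log" reduction are the same statement in two conventions.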
Filter materials
The four polypropylene melt-blown nonwoven filtering materials used for the bioaerosol tests were provided by the manufacturer Shenzhen China Textile Filters Nonwoven Fabric Co., Ltd. (China). The materials are classed into four filter levels depending on the test standards of GB/T14295-2008 and GB/T6165-2008 (China). We renamed the four polypropylene melt-blown nonwoven filtering materials as PP-1, PP-2, PP-3, and PP-4 based on their filtering levels. These filter materials are widely used in indoor air purification systems, and their detailed information is listed in Table 1. All filters were cut into discs of 6 cm in diameter before use. The size and morphology of filtering materials were observed using scanning electron microscopy (SEM, Hitachi S-4800, Japan).
Virus preparation and assay
The icosahedral bacteriophages f2 and MS2 were selected for the bioaerosol filtration and filter coating experiments, as they are smaller than most microorganisms (20-26 nm) and could thus reflect the minimum performance required for filter materials to remove microbial aerosols. Moreover, both phages have the same nucleic acid type as that of SARS-CoV-2 (single-stranded RNA) and are smaller than the 60-140 nm size range of SARS-CoV-2 (Leung and Sun, 2020; Zhu et al., 2020).
The f2 and MS2 phages and their host Escherichia coli were purchased from the Institute of Health and Environmental Medicine, Academy of Military Medical Sciences, China. The phages were propagated as follows: approximately 1 mL of f2 or MS2 phage suspension was added to an Escherichia coli culture and incubated in a shaker at 37 °C for 24 h. After centrifugation at 4000 r/min for 10 min, the supernatant was filtered through a 0.22 μm polyvinylidene fluoride (PVDF) microporous filter, and the phage filtrate was stored at 4 °C. The f2 and MS2 phages were titrated by the double agar plate method and counted as plaque-forming units per mL (PFU/mL). Plates containing 30 to 300 plaques were considered accurately countable (Cheng et al., 2014).
Filtration experiment
The experimental devices were composed of three main parts: virus aerosol production, filtration and detection. The first part was an organic glass chamber in which the virus aerosol was introduced by a nebulizer (TK-3, China). The ceiling fans in the organic glass chamber were specifically designed to homogenize the distribution of virus aerosol. The second part was a filter unit to which the test filter material was fixed by a plastic component. The homogeneous virus aerosol in the organic glass containers was carried into the plastic components through a rubber hose. The funnel-shaped internal space of the plastic components was specifically designed to ensure uniform face velocity at the tested filter material and a homogenous coating of aerosolized microorganisms on the filter material. Most virus aerosols are intercepted by the filter material, and a few virus aerosols pass through the filter material to the next collection stage. The third part was a virus aerosol collecting unit. The virus aerosols from the blank and experimental groups were collected by a liquid impact decay biological sampler (AGI-303, China) (Fig. 1). AGI-303 consists of a glass sampling bottle, bracket, and absorption solution, and is suitable for the sampling of microbial aerosols. Multiple microorganisms in the microbial particle group can be released and evenly distributed in the sampling liquid owing to the air flow and agitation of the sampling liquid during the sampling process. The number of microorganisms in the air can be accurately measured after further culturing. An adjustable flow meter was used to regulate and monitor the flow, thus indirectly controlling the wind speed through the tested filter material.

Fig. 1 Schematic diagram of the experiment for trapping virus aerosol by the filtration material: 1-Air; 2-HEPA filter; 3-organic glass chamber; 4-TK-3 nebulizer; 5-ceiling fans; 6-virus aerosol; 7-filter material; 8-micro-manometer; 9-filter holder; 10-biological sampler (AGI-303); 11-flow meter; 12-pump.
Inactivation experiment
The UV-C inactivation experimental device is shown in Fig. 2. First, the air carrying the virus aerosol was passed through the filter material for 1 h to intercept and enrich phage aerosols. The number of phages enriched on the filter material was varied by changing the concentration of the phage solution in the aerosol generator to obtain different concentrations of microbial aerosols. The virus-rich filter material was then cut into two sections, one piece as a control group that was placed in an opaque box without a UV-C lamp, and the other as an experimental group placed in an opaque box equipped with a UV-C lamp. Then, a radiometer (UVC-254, Japan) was used to measure the irradiation intensity on the filter material in the experimental group, which was adjusted by changing the distance between the filter film and the UV-C lamp. The filter materials of the control and experimental groups were taken out at the allocated time, eluted by PBS eluent, and detected by the double agar plate method. The inactivation rate was then calculated.
Detection of virus coated on filters
The virus-rich filter samples-both irradiated and nonirradiated-were cut into several smaller fragments, which were then added to a glass test tube filled with 10 mL PBS eluent and shaken for 60 s to elute the viruses coated on the filter material (Pigeot-Remy et al., 2014). Then, the double agar plate method was used for culturing and counting the concentrations of the viruses.
Results and discussion
3.1 Scanning electron microscope (SEM) analysis

The SEM images of various filter materials are shown in Fig. 3. It can be seen that the fibers of the several filter materials were smooth and evenly distributed. In the typical structure of melt-blown nonwoven fabrics, circular staggered fibers form a fiber web. It can also be seen from Fig. 3 that the diameter of the PP-1 fibers was relatively uniform, with a coarse fiber diameter of approximately 20 μm and a fine fiber diameter of approximately 10 μm. In contrast, the diameter differences in PP-2 and PP-3 fibers were more obvious, indicating that the PP-1 material had higher filtration precision. The diameter range of the PP-2 fibers was 1-20 μm, and that of the PP-3 fibers was 1-40 μm. The PP-4 fibers were the thickest, with the thickest fibers exceeding 100 μm in diameter. The SEM micrographs of filter materials after virus filtration are shown in Fig. 4. It can be seen that a large number of virus aerosol droplets were attached to the fibers of the PP-1 filter materials, and that the size of most droplets was between 0.5 and 3 μm.
Fig. 2 Schematic diagram of the experimental device for inactivating virus by UV-C irradiation: 1-Light tight box; 2-UV-C lamp; 3-radiometer; 4-height adjustment platform; 5-illuminometer probe; 6-filter material with enriched virus.

3.2 Wind resistance of filter material

Fig. 5 shows the change in wind resistance caused by the four PP materials at different filtration speeds. The results showed that within the filtration speed range of 1-10 cm/s, the material wind resistance was proportional to the filtration speed, presenting a linear relationship. Detailed parameters are shown in Table 2. Among the four PP materials, PP-1 had the fastest pressure drop rate with an increase in filtration speed, while PP-4 had the lowest rate. This was related to the degree of precision of the materials, where the more precise materials experienced a greater pressure drop.
The wind resistance of the filter material was the key factor affecting the air volume that passed through: the lower the wind resistance, the higher the ventilation volume. In a previous study (Majchrzycka, 2014), the air flow resistance demonstrated by polylactic acid (PLA) and PLA modified with Bioperlite (PLA + Bioperlite) was 202-322 Pa. The maximum wind resistance of the filter materials selected in the present study was only 40 Pa, which is lower than the wind resistance of most high-efficiency filter materials.
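The linear pressure-drop-versus-speed relationship reported above can be recovered from raw readings with an ordinary least-squares fit, where the slope is the material's flow resistance coefficient. The speed and pressure values below are illustrative (a perfectly linear series), not the data behind Table 2.

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b (pressure drop vs filtration speed)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a, b

# Illustrative readings: filtration speed in cm/s, pressure drop in Pa
speeds = [1, 3, 5, 7, 10]
dp = [4, 12, 20, 28, 40]  # hypothetical, perfectly linear for the example
a, b = linear_fit(speeds, dp)
print(f"dP ≈ {a:.1f} Pa per cm/s")  # prints: dP ≈ 4.0 Pa per cm/s
```

With real measurements the intercept b would typically be near zero for a clean filter, and a larger slope would correspond to a more precise (higher pressure drop) material such as PP-1.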
Distribution of virus aerosol particle size
The filter material selected in this experiment was mainly used to remove microorganisms in the air through filtration and retention. For filtration technology, the particle size of microbial aerosols plays a decisive role in filtration performance. Therefore, the particle size and particle size distribution of microbial aerosols need to be determined before testing the performance of the filter materials. In this study, the commonly used f2 and MS2 phage viruses were selected to simulate the smallest microorganism aerosols in the air, and the filtration performance of different filter materials was investigated. A phage solution with an initial concentration of 2.0 × 10⁸ PFU/mL was prepared and added into the TK-3 microbial aerosol generator. Of this, 0.3 mL of phage solution was converted into microbial aerosol every minute for 10 min. The particle size distribution of microbial aerosols was measured by sampling with a six-stage Andersen sampler (JMT-6, China) for 1 min, as shown in Fig. 6.
The results showed that the minimum particle size of MS2 and f2 phage aerosols was less than 1 μm, and the maximum particle size was more than 7 μm. The range was mainly concentrated within 0.65-3.3 μm, with particle sizes of 0.65-1.1 μm accounting for 27.7 %, 1.1-2.1 μm for a maximum of 37 %, and 2.1-3.3 μm for only 14.7 % of all particles. Pigeot-Remy et al. (2014) measured the particle sizes of microbial aerosols, which were mainly distributed in the range of 0.65-1.0 μm. In our study, the particle size distribution range of virus aerosols was much wider and larger, which may be because of the use of different microbial aerosol generators or measurement methods. Furthermore, the abundance of particles of other sizes was low, and few particles approached the 20-26 nm size of the free f2 and MS2 phage particles. This may be because the phages are encapsulated in small droplets during the atomization process and condense into larger virus aerosols, similar to the virus aerosols produced by the human body through sneezing (Kalogerakis et al., 2005).
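The per-bin percentages reported above come from normalizing per-stage counts by the total. A minimal sketch of that bookkeeping follows; the stage counts are hypothetical (for illustration only), not the measured data behind Fig. 6, though the bin boundaries match those in the text.

```python
# Hypothetical counts per six-stage Andersen sampler stage (illustration only)
stage_counts = {
    ">7.0 um": 30, "4.7-7.0 um": 45, "3.3-4.7 um": 60,
    "2.1-3.3 um": 110, "1.1-2.1 um": 277, "0.65-1.1 um": 207,
}
total = sum(stage_counts.values())
fractions = {size_bin: 100 * count / total for size_bin, count in stage_counts.items()}
for size_bin, pct in fractions.items():
    print(f"{size_bin}: {pct:.1f} %")
```

The same normalization, applied to the real stage counts, yields the 27.7 %, 37 %, and 14.7 % figures quoted in the text.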
Effects of different filter materials on the interception performance of virus aerosols
The performance of different filtration materials on f2 and MS2 virus aerosols is shown in Fig. 7 (the filtration rate was 5.3 cm/s, and the initial concentration was 2.0 × 10⁵ PFU/m³). The results showed that the materials with high filtration precision demonstrated a higher interception rate for virus aerosols. The interception performance of the PP-1 filter material for f2 microbial aerosols was more than 2 log, and the interception performance for MS2 microbial aerosols was up to 3 log. The PP-4 material presented the lowest interception rate of less than 1 log for the two phages. This means that high-precision commercial filter materials can effectively remove all kinds of microorganisms from the air.
A previous study (Majchrzycka, 2014) determined the filtration efficiency of polylactic acid (PLA) and PLA modified with Bioperlite (PLA + Bioperlite) for S. aureus and P. aeruginosa to be 94.96 %-99.34 %. Another study showed that the filtration efficiency of most mask filter materials for particles with sizes of 0.37-20 μm was more than 90 %, and the filtration efficiency of N95 masks was more than 99 % (Zou and Yao, 2014). This matches the efficiency of the PP-1 filter material for the virus aerosols used in our study, indicating that the PP-1 filter material has adequate filtration efficiency to meet the protection requirements (Technical committee CEN/TC 79 "Respiratory protective devices", 2009).
Effects of filtration rates on the interception of virus aerosols
To guide the design of a reasonable filtration rate for microbial aerosol interception, the influence of filtration rate on microbial interception by filter materials should be investigated. Herein, the interception performance of the PP-1 filter material for f2 and MS2 phage aerosols with initial concentrations of 2.0 × 10⁵ PFU/m³ was tested at filtration rates of 1, 3, 5, and 7 cm/s, as shown in Fig. 8.
The filtration mechanisms of HEPA filters include interception, sedimentation, impaction, diffusion, and electrostatic adsorption (Curiel and Lelieveld, 2014). Within the filtration rate range of 1-7 cm/s, the interception performance of the filter material first increased and then decreased with an increase in filtration rate, and the PP-1 material led to the greatest interception of f2 phages at a filtration rate of 5 cm/s. The most likely reason for this phenomenon is that inertial collision strengthens interception at higher filtration speeds, while interception via free diffusion simultaneously becomes weaker. The best interception performance occurs when the two interception modes work together.
It is noteworthy that the interception contribution of inertial collision and free diffusion is affected by filter material performance and particle size distribution, and the optimal interception speed can be much greater or lower than 5 cm/s. However, according to the experimental results of this study, when high-precision filter materials such as PP-1 are used to intercept virus aerosols, the filtration rate should be as close to 5 cm/s as possible.
Effect of UV-C intensity on virus aerosol inactivation process
The microorganisms concentrated on the filter material pose a serious threat to indoor air quality, so it is necessary to inactivate them on the filter material. In this experiment, a 254 nm UV-C lamp with a good sterilization effect and reliable operating duration was selected to investigate the influence of irradiation intensity and time on sterilization and to optimize the experimental parameters. Phage aerosols with initial concentrations from 2.0 × 10⁷ to 2.0 × 10⁸ PFU/m³ were prepared and continuously passed through a material at a filtration rate of 5.3 cm/s for 1 h, so that the number of viruses concentrated on each material was from 5.0 × 10⁶ to 5.0 × 10⁷. The duration of UV-C exposure for the inactivation of f2 and MS2 phages was set to 30 min, which demonstrated an inactivation trend varying with UV-C irradiation intensity, as shown in Fig. 9. SPSS 19.0 software was used to curve-fit the values of f2 and MS2 viruses that survived the 30 min of irradiation at various irradiation intensities. The fit was best when using an exponential function. The exponential fitting results of the f2 and MS2 viruses were as follows (Eqs. (2) and (3)):

log(MS2) = 5.1e^(−0.62x), R² = 0.994, (3)

where x is the irradiation intensity in mW/cm². The log value of viable viruses decreased with an increase in UV-C irradiation intensity, but the rate of decline decreased, indicating that the improvement in the inactivation effect was limited with a further increase in irradiation intensity.
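The reported exponential fit can be reproduced with standard tools. The sketch below uses SciPy's `curve_fit` on synthetic data generated from the reported MS2 relation (the raw measurements are not available here, so the data points are illustrative, not the paper's):

```python
import numpy as np
from scipy.optimize import curve_fit

def survival_model(x, a, b):
    # log10 of surviving PFU as an exponential function of UV-C intensity
    # x (mW/cm^2), mirroring the fitted form log(MS2) = 5.1*exp(-0.62*x)
    return a * np.exp(-b * x)

# Illustrative noiseless data generated from the reported MS2 fit
x = np.linspace(0.0, 6.0, 13)
y = 5.1 * np.exp(-0.62 * x)

popt, _ = curve_fit(survival_model, x, y, p0=(5.0, 0.5))
a_fit, b_fit = popt
```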
The survival of f2 and MS2 phages decreased with increasing irradiation intensity, and the slope of the survival number also decreased with increasing irradiation intensity. The inactivation effect of UV-C irradiation on f2 phages was better than that on MS2 phages. The difference in the resistance of different viruses to disinfection may result from the complexity of their protein capsids and nucleic acids (Thurston-Enriquez et al., 2005; Tseng and Li, 2006). The protein capsid of the MS2 phage was likely more resistant to UV irradiation than that of the f2 phage, which gave the MS2 phage a greater resistance to UV irradiation. Therefore, the residual f2 phage concentration was lower than that of the MS2 phage under the same conditions. Typically, bacteria have stronger resistance to UV-C irradiation than viruses (McDonnell and Burke, 2011). As demonstrated in our study, virus aerosols were inactivated faster than bacterial aerosols under UV-C irradiation, as more than 99 % of f2 and 90 % of MS2 phages retained by PP-1 filters were inactivated within 0.5 h at an irradiation dosage of 9.0 × 10³ mJ/cm². However, in the literature, it has been demonstrated that 99 % of aerosol bacteria retained by AC filters (polyester fibres with an inner activated charcoal layer) could only be inactivated within 4 h at a dosage of 5.18 × 10⁴ mJ/cm² (Pigeot-Remy et al., 2014).
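Dose values such as 9.0 × 10³ mJ/cm² combine irradiation intensity and exposure time. A minimal illustration (the 5 mW/cm² intensity below is implied by the reported dose and the 0.5 h exposure, not stated explicitly in the text):

```python
def uv_dose_mj_per_cm2(intensity_mw_per_cm2: float, exposure_s: float) -> float:
    # Dose (mJ/cm^2) = intensity (mW/cm^2) x time (s), since 1 mW*s = 1 mJ
    return intensity_mw_per_cm2 * exposure_s

# 5 mW/cm^2 applied for 0.5 h gives the reported 9.0e3 mJ/cm^2 dosage:
dose = uv_dose_mj_per_cm2(5.0, 30 * 60)
```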
Inactivation effects of UV-C on virus aerosols based on exposure time
In this experiment, a microbial aerosol with an initial concentration of 2.0 × 10⁷-2.0 × 10⁸ PFU/m³ was prepared and continuously passed through a filter material at a filtration rate of 5.3 cm/s for 1 h, so that the number of microorganisms concentrated on each material was 5.0 × 10⁶-5.0 × 10⁷ PFU. Fig. 10 shows the trend of the inactivation ability of UV-C (254 nm) against f2 and MS2 phages with an increase in irradiation time at different irradiation intensities. Under the same UV-C irradiation time but increased radiation intensity, the survival of f2 phages decreased. This can be explained not only by the increased molecular damage to the viral nucleic acid, but also by higher-intensity irradiation reaching further inside the filter material, thus exposing more virus aerosols to effective UV-C irradiation.
Although the number of viable f2 phages decreased with an increase in UV-C exposure duration from 0 to 30 min, under the same light intensity there was no further increase in inactivation after 30 min of exposure. This is likely because, while the f2 phages attached to the surface of the filter material were inactivated by UV-C irradiation within 30 min, the virus aerosol particles that had entered the inner filter material could not be exposed to UV-C irradiation, allowing them to survive for much longer durations. Prior studies have shown that the amount of bacteria coated on an AC filter does not decrease after 6 h of UV-A irradiation or after 4 h of UV-C irradiation, indicating a similar effect, where the bacteria in shallow layers are sufficiently inactivated within a few hours, but those in the inner activated charcoal layer of the AC filter cannot be exposed to a sufficient UV-C dose to be permanently damaged.
However, bacteria retained by glass fiber filters could be completely inactivated, which can be explained by the better light transmittance and thinness of glass fiber. At 0.42 mm thick, glass fiber filters are only one-sixth the thickness of an AC filter, thus sufficiently exposing all bacteria to UV-C radiation (Pigeot-Remy et al., 2014). In this study, the PP-1 filter comprised a polypropylene electret melt-blown nonwoven material with a thickness of 0.5 mm. The experimental results showed that, just as for the AC filter material in the aforementioned literature, there was a limit to inactivation under UV-C irradiation. This may be because the internal structure and light transmittance of the PP-1 filter material differ from those of glass fiber filters.
Recently, another study showed that more than 5 log (99.999 %) of E. coli coated on a glass slide could be inactivated when directly exposed to UV-C irradiation at a dosage of 30 mJ/cm² (Schnell et al., 2021). In our study, however, less than 50 % of f2 phages were inactivated by UV-C irradiation at a dosage of 90 mJ/cm². This further confirms that the filter material has a shielding effect on the virus. The fiber network structure of the filter material shielded the virus aerosols from UV-C irradiation, causing the UV-C intensity received by f2 aerosols distributed within the filter material to be lower than that reaching the surface of the filter material. Therefore, to avoid high virus enrichment inside the filter material, UV-C irradiation should be used to inactivate the viruses intercepted on the surface of the filter material, and materials with good light transmittance, such as glass fiber, should be chosen. Despite these improvements, it is still inevitable that some viruses will enter the filter material, but these could be inactivated by adding UV-C irradiation on the other side of the filter. To inactivate microorganisms within the filter material, the irradiation time and intensity should be appropriately increased to ensure a sufficient dosage of irradiation for effectively inactivating microbial aerosols.
Effects of the initial phage concentration on inactivation process
The effect of the initial virus concentration on the removal effect is shown in Fig. 11. The results showed that as the concentration of f2 phages on the filter gradually increased, the inactivation efficiency by UV-C irradiation decreased (although the total number of inactivated f2 phages increased). This may be because, at a higher initial concentration, phages on the outer surface had a shielding effect on the phages underneath, resulting in a lower proportion of inactivated viruses. This indicates that the UV-C irradiation frequency needs to be increased to prevent the surface of the filter materials from being clogged by an excessive enrichment of viruses.
Conclusions
The combined filtration and UV-C irradiation proposed in this study can dynamically, continuously, and efficiently remove microbial aerosols, and there is an optimal ventilation speed for virus aerosol filtration. The log value of the surviving viruses trapped by the filter material decreased exponentially with an increase in UV-C irradiation intensity, which indicates that an excessive increase in irradiation intensity does little to improve the inactivation efficiency. Similarly, UV-C has an inactivation threshold for virus aerosols trapped by the filter material, beyond which the remaining viruses cannot be inactivated even by increasing the irradiation. Moreover, the higher the initial concentration of viruses trapped by the filter, the lower the inactivation efficiency, mainly because of the blocking effect of the virus aerosols themselves. The air that is moved through indoor air ventilation systems can be effectively, efficiently, and economically purified via a filter design that uses a light-transmissible material, provides an adequate ventilation speed, and exposes the virus aerosols to UV radiation at a proper dosage for a sufficient duration. Such air filtration is essential for the development and application of indoor air pollution control technology.
How to precisely measure the volume velocity transfer function of physical vocal tract models by external excitation
Recently, 3D printing has been increasingly used to create physical models of the vocal tract with geometries obtained from magnetic resonance imaging. These printed models allow measuring the vocal tract transfer function, which is not reliably possible in vivo for the vocal tract of living humans. The transfer functions enable the detailed examination of the acoustic effects of specific articulatory strategies in speaking and singing, and the validation of acoustic plane-wave models for realistic vocal tract geometries in articulatory speech synthesis. To measure the acoustic transfer function of 3D-printed models, two techniques have been described: (1) excitation of the models with a broadband sound source at the glottis and measurement of the sound pressure radiated from the lips, and (2) excitation of the models with an external source in front of the lips and measurement of the sound pressure inside the models at the glottal end. The former method is more frequently used and more intuitive due to its similarity to speech production. However, the latter method avoids the intricate problem of constructing a suitable broadband glottal source and is therefore more effective. It has been shown to yield a transfer function that is similar, but not exactly equal, to the volume velocity transfer function between the glottis and the lips, which is usually used to characterize vocal tract acoustics. Here, we revisit this method and show, both theoretically and experimentally, how it can be extended to yield the precise volume velocity transfer function of the vocal tract.
Introduction
The vocal tract transfer function, i.e., the complex frequency-dependent ratio of the volume velocity (or alternatively sound pressure) at the lips to the volume velocity through the glottis, is widely used to characterize the acoustics of the vocal tract. It contains the information about the frequencies and bandwidths of the formants (resonances), which are of primary importance in many studies. Besides the formants, most transfer functions contain additional information in terms of close pole-zero pairs, which are caused by side cavities like the piriform sinus, the vallecula (a cavity between tongue base and epiglottis), interdental spaces, or the nasal cavity [1][2][3]. The measurement of the complete transfer function of real vocal tract geometries with a bandwidth of up to at least 6 kHz (speech range) and a high signal-to-noise ratio is therefore of paramount interest. The only known direct method to measure the vocal tract transfer function in vivo requires an external broadband excitation of the vocal tract with a transducer placed in the vicinity of the larynx, while measuring the sound pressure radiated from the mouth [4][5][6]. Because the sound of the source must pass the tissue of the neck to excite the air in the vocal tract, the source must be firmly pressed against the neck, which can be inconvenient. Furthermore, depending on the subject, the damping by the tissue may be so strong that the signal-to-noise ratio of the recorded signal is not high enough to be useful.
A more convenient direct method to determine the formant frequencies (but not the full volume velocity transfer function) excites the vocal tract with a volume velocity source close to the mouth opening and simultaneously measures its sound pressure response right next to the source [7][8][9]. The quotient of the recorded pressure and the emitted volume velocity is the impedance of the vocal tract in parallel with the radiation impedance, the peaks of which correspond to the radiation-loaded vocal tract resonances [7]. However, it is very difficult to construct a volume velocity source with a flat response over a sufficiently high bandwidth, and the radiation from the mouth is physically disturbed by the source and the microphone. Furthermore, the input impedance differs from the volume velocity transfer function that is usually used to characterize vocal tract acoustics.
A relatively new method to obtain a detailed vocal tract transfer function of a sustained speech sound consists of measuring the vocal tract shape using 3D magnetic resonance imaging (MRI), segmenting the vocal tract shape from the MRI data, printing a 1:1 physical model of the shape using a 3D printer, and measuring the transfer function of the physical model [2,[10][11][12]. The advantage in contrast to measurements in humans is that the printed models have no limitations with respect to the placement of sound sources and microphones. Two approaches have been described to obtain the transfer function of these models.
One approach is to excite the models with a broadband sound source at the glottis and measure the sound pressure radiated from the lips [11][12][13]. Here, the sound source should ideally be a volume velocity source with an infinite output impedance, which is hence independent of the vocal tract load as assumed by the source-filter theory of speech production [14]. However, constructing and calibrating a broadband volume velocity source with a flat frequency response is intricate. Some studies used a loudspeaker or horn driver connected to an impedance matching horn with a small ( 4 mm) annular aperture at the distal end of the horn, which is attached to the glottal end of the vocal tract model [13,15]. Alternatively, the horn is omitted and the speaker is directly attached to a connector plate with a small hole [12]. For a good approximation of a volume velocity source, the hole must be so small that its acoustic resistance is much higher than the highest input impedance of the models. However, the high resistance of the hole and the cavity resonances of the horn can affect the loudspeaker behavior in an unpredictable way. Therefore, the usable bandwidth of this type of source is typically rather limited. For example, Speed et al. reported an upper band limit of 4 kHz, i.e., the frequency of the first marked zero that could not be equalized due to the limited dynamic range of the loudspeaker [13].
Similar to this is the use of an in-ear headphone as a glottal source [11]. However, to our knowledge, the effect of the vocal tract load on the acoustic excitation of such a headphone has not been examined yet. Another method to produce a well-defined glottal volume velocity is based on a calibrated impedance head connected to the glottal end of the model [16]. This technique allows high precision and dynamic range over a wide frequency range, but requires a sophisticated calibrated impedance head with three measurement microphones.
The other general approach to measure the transfer function is to excite the vocal tract model with an external sound source Ps(ω) in front of the lips and measure the sound pressure P1(ω) inside the model at the glottal end [1,2,10], as illustrated in Fig 1A. This method avoids the intricate problem of constructing a suitable volume velocity source and only requires an ordinary wideband loudspeaker and a microphone. Kitamura et al. [10] argued that

P1(ω) = (Zr(ω)/Z0) H(ω) Ps(ω), (1)

where H(ω) = U2(ω)/U1(ω) is the volume velocity transfer function between the volume velocities U1(ω) through the glottis and U2(ω) through the lips (see Eqs. A-4 and A-10 in Kitamura et al. [10]), which is usually used to characterize vocal tract acoustics, Zr(ω) is the radiation impedance, and Z0 is the characteristic impedance of a plane wave. Therefore, if the frequency response Ps(ω) of the source is assumed to be independent of frequency, P1(ω) is close to H(ω) in terms of formant frequencies, but the spectral tilt is different because Zr is monotonically increasing with frequency. So far, the magnitude of this tilt and hence the deviation of P1 from the true volume velocity transfer function has not been examined. In principle, the spectral tilt could be compensated by an adapted source. However, this presupposes the exact knowledge of the source characteristics, i.e., one must be able to quantify the behavior of the source coupled with the vocal tract models. Since in many cases the sources are not independent of the model, this compensation would have to be explicitly determined for each configuration. According to Eq (1), it seems likely that the deviation of P1(ω) from H(ω) depends on the model geometry, because the radiation impedance Zr(ω) depends on the mouth aperture and the shape of the lips.
The purpose of this paper is to examine the extent to which P 1 differs from the true volume velocity transfer function for different vocal tract shapes and to propose an extension to Kitamura's method that allows the precise determination of the volume velocity transfer function. Therefore, we present an alternative description of the measurement situation in terms of an acoustic circuit model. The analysis of this circuit model shows that the precise volume velocity transfer function can be obtained with an additional sound pressure measurement in front of the closed lips of the vocal tract model (without the need for an actual acoustic flow measurement). This method is used to measure the volume velocity transfer functions of four physical vocal tract models and the results are compared to finite element simulations of the same models.
We must emphasize that the proposed method cannot be used to directly measure the volume velocity transfer function of the real vocal tract in vivo. Instead, the vocal tract performing the articulation of interest has to be scanned in an MRI scanner, and the vocal tract shape has to be segmented from the MRI data and printed as a 3D object. Despite this limitation, there are a range of applications for the proposed method. On the one hand, certain articulatory strategies during the production of phones in speech and singing can be precisely associated with changes in the acoustic transfer function. This may help to examine how professional singers tune the acoustic properties of their vocal tract when they sing at different pitches [11], or how we adapt vocal tract acoustics for different voice qualities (e.g., between spoken and shouted speech). On the other hand, the detailed and precise volume velocity transfer functions for realistic 3D vocal tract shapes can serve as ground truth for the validation of methods to transform 2D or 3D vocal tract models to plane-wave acoustic tube models in articulatory speech synthesis (e.g., [17,18]). The transformation from 3D vocal tract models to low-dimensional acoustic tube models is necessary because full 3D acoustic simulations are far too slow for articulatory speech synthesis in real-time.
Theory
In this section we show that the volume velocity transfer function of the vocal tract, i.e., the ratio of the volume velocity U2(ω) through the lips to the volume velocity U1(ω) through the glottis, corresponds exactly to a ratio of two pressures P1(ω) and P3(ω), which can easily be measured when the vocal tract is externally excited with a volume velocity source Us(ω) as in Fig 1A and 1B.
The pressure P1 is measured at the glottis while the mouth is open, and the pressure P3 is measured right in front of the closed lips.
We start by modeling the measurement situation in Fig 1A with the general acoustic circuit in Fig 2A. Here, the vocal tract is represented in terms of a two-port network where the input (= glottal) pressure P1 and the input volume velocity U1 are related to the output pressure P2 and the output volume velocity U2 by a 2 × 2 transmission matrix M(ω) = [mij(ω)] as follows:

P1 = m11 P2 + m12 U2,
U1 = m21 P2 + m22 U2. (2)

The transmission of sound between the mouth opening and the location of the external sound source is correspondingly modeled by a two-port network with the transmission matrix N(ω) = [nij(ω)]. Let O(ω) = M(ω)N(ω) = [oij(ω)] denote the joint transfer matrix between the glottis and the external source. Due to the principle of reciprocity [19], det M = 1, det N = 1 and det O = 1. Considering that U1 = 0, the sound pressure P1 measured at the glottis can be expressed as a function of the volume velocity Us at the sound source:

P1 = −Us/o21. (3)

We now consider the case where the mouth of the vocal tract is closed with a plate and the sound source is at the same position as before, as shown in Fig 1B. In this case, the volume velocity U2 through the mouth of the model is zero. The pressure P3 that is measured in front of the closed lips is now an open-circuit pressure, as shown by the equivalent circuit in Fig 2B. It can be expressed as a function of the volume velocity Us at the sound source as follows:

P3 = −Us/n21. (4)

From Eqs (3) and (4) we can form the ratio P1/P3:

P1/P3 = n21/o21 = 1/(m21 (n11/n21) + m22). (5)

The quotient n11/n21 on the right-hand side of Eq (5) equals the input impedance P2/U2 of the two-port network for the "environment" in Fig 2, which is equivalent to the radiation impedance Zr of the vocal tract. This can be proven by considering the two-port network described by the transmission matrix N(ω), which represents the exterior space between the vocal tract and the external sound source (loudspeaker). The equations to describe its transfer characteristics are as follows:

P2 = n11 Ps + n12 Us, (6)
U2 = n21 Ps + n22 Us. (7)

For the case of an inactive loudspeaker, one gets Us = 0, and the ratio

Zr = P2/U2 = n11/n21 (8)

can be derived.
It should be noted that Eq (8) is only an approximation that presumes that the presence of the loudspeaker does not essentially change the exterior space. This assumption holds better the smaller the loudspeaker and the greater the distance from the loudspeaker to the resonator. Combining Eqs (5) and (8), we obtain

H(ω) = U2(ω)/U1(ω) = P1(ω)/P3(ω). (9)

This pressure ratio is exactly the desired volume velocity transfer function H = U2/U1 of the vocal tract, which is easily verified with the second equation in (2) by setting P2 = U2 Zr. Note that for both cases shown in Fig 2A and 2B, Us is assumed to be unaffected by the configuration (open or closed mouth) of the vocal tract model under test, because the whole vocal tract model constitutes a small reflecting surface at a certain distance from the loudspeaker in an otherwise open acoustic space. It is also worth noting that the measurement situation depicted in Fig 1A closely resembles the hearing situation when a sound wave emitted from a volume source Us hits the human ear. In this analogy, the vocal tract corresponds to the ear canal, and its glottal end corresponds to the eardrum. The hearing situation has been well studied in the context of binaural hearing [20] and can be adapted to obtain the same results as we did above.
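The two-port algebra behind this result can be checked symbolically. The sketch below verifies with SymPy that the pressure ratio P1/P3 equals H = 1/(m21·Zr + m22) for arbitrary reciprocal two-ports, following a standard ABCD (transmission-matrix) sign convention, which may differ from the paper's in inessential signs that cancel in the ratio:

```python
import sympy as sp

m11, m12, m21, m22 = sp.symbols('m11 m12 m21 m22')
n11, n12, n21, n22 = sp.symbols('n11 n12 n21 n22')
Ps, Us = sp.symbols('Ps Us')

# Two-port transmission matrices; reciprocity (det = 1) is enforced by
# eliminating the off-diagonal elements m12 and n12.
M = sp.Matrix([[m11, m12], [m21, m22]]).subs(m12, (m11*m22 - 1)/m21)
N = sp.Matrix([[n11, n12], [n21, n22]]).subs(n12, (n11*n22 - 1)/n21)
O = M * N  # joint two-port from the glottis to the external source

# Open-mouth case: the closed glottal end gives U1 = 0; solve for P1
Ps_open = sp.solve(O[1, 0]*Ps + O[1, 1]*Us, Ps)[0]
P1 = O[0, 0]*Ps_open + O[0, 1]*Us

# Closed-lips case: U2 = 0 at the plate; solve for P3
Ps_closed = sp.solve(N[1, 0]*Ps + N[1, 1]*Us, Ps)[0]
P3 = N[0, 0]*Ps_closed + N[0, 1]*Us

Zr = n11 / n21                    # input impedance of the exterior network
H = 1 / (M[1, 0]*Zr + M[1, 1])    # U2/U1 with radiation load P2 = U2*Zr

assert sp.simplify(P1/P3 - H) == 0  # the pressure ratio equals H
```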
Preparation of the physical vocal tract models
To test the theory above, four physical vocal tract models were created. Three of the models represent the vocal tract shapes for the vowels /a/, /u/ and /i/ produced by a 24-year-old male native German subject. The participant was a singing student at the Voice Research Laboratory, Hochschule für Musik Carl Maria von Weber Dresden, with whom one of the co-authors cooperates. The MRI images were taken in 07/2015. Data acquisition within this study was approved by the ethical review committee of the medical faculty "Carl Gustav Carus" of the TU Dresden (EK153042011). After having been informed about risks and procedures, the participant provided written consent. The shapes were obtained from MRI data of the vocal tract according to a procedure presented in detail before [21,22]. In brief, the subject produced each of the vowels for about 12.1 s, while his vocal tract was scanned using a 3 T MRI machine (Magnetom Trio Trim, Siemens Medical Solutions, Erlangen, Germany). The MRI was performed with a 12-element head-neck coil using a 3D volume-interpolated breath-hold examination sequence with 1.22 ms/4.01 ms (echo time/repetition time), flip angle 9°, a field-of-view of 300 × 300 mm², a matrix of 288 × 288 pixels, 52 sagittal slices and a slice thickness of 1.8 mm.
The vocal tract cavities in the MRI data were segmented using IPTools [23], slightly smoothed, and exported as triangle meshes representing the vocal tract walls. The termination of the vocal tract models at the lips was approximated by a plane parallel to the coronal plane. The anteroposterior position of this plane was set to the place where the vertical distance between the midsagittal contours of the upper and lower lips reached its minimum. The lateral gaps of the vocal tract in the region of the lips between the corners of the mouth and the termination plane were manually closed during the segmentation.
Because the teeth are invisible in MRI, they were separately reconstructed as triangle meshes by scanning plaster models of the subject's mandible and maxilla using a NextEngine desktop 3D laser scanner. All triangle meshes were then voxelized with a voxel size of 0.25 × 0.25 × 0.25 mm³ using binvox [24] (www.patrickmin.com/binvox/) and merged into a single voxel model using 3DSlicer [25] (www.slicer.org). The teeth were positioned relative to the vocal tract cavities by careful visual inspection. The merged voxel models for /a/, /u/ and /i/ were then converted back into triangle meshes using 3DSlicer. The free software package Meshlab (www.meshlab.sourceforge.net) was then used for adaptive mesh simplification and extrusion of the surfaces to create vocal tract walls with a thickness of 3 mm. Finally, the programs netfabb Basic (www.netfabb.com) and ParaView (www.paraview.org) were used to repair defects in the triangle meshes and separate the models into two halves suitable for 3D printing. The model halves were printed with a 3D printer (ULTIMAKER 2, www.ultimaker.com) using polylactic acid (PLA) material, and the two halves of each of the /a/, /u/ and /i/ models were glued together (S1 File). In addition to the realistic models for /a/, /u/ and /i/, a uniform tube of 170 mm length and 27.6 mm inner diameter was created as a fourth model (denoted as /ə/ in the following) and printed in one piece with the 3D printer.
All four models were terminated at the glottal end with a uniform endplate of 3 mm thickness and a hole with 10 mm diameter to allow inserting the measurement microphone with a rubber adapter (Figs 1A and 3A).
Finally, the printed models were covered with plaster of a thickness of about 1 cm to increase the mass of the walls and avoid sound radiation from the model surfaces.
Measurement of the transfer functions of the physical models
For each of the four physical models, the volume velocity transfer function was obtained according to the theory outlined in Section II.A by two successive sound pressure measurements (P1 and P3) with the setups in Fig 1A and 1B. During both measurements, the model was excited by an external sound source Us (VISATON speaker, type FR 10-8 Ohm, in a custom-made cylindrical enclosure) producing an exponential sweep with a power band from 100 to 10,000 Hz and a duration of 21 s according to the method by Farina [26]. The sound source was located 25 cm in front of the mouth opening to prevent near-field effects on the model. The pressure P1 was recorded with a 1/4" measurement microphone (type MK301E/MV310, www.microtechgefell.de) inserted into the glottal end of the model so that the microphone membrane was flush with the upper surface of the "vocal folds". For the measurement of P3, the mouth opening of the model was closed with a stiff plate of 3 mm thickness and the size of the mouth, fixed to the model with double-sided tape. P3 was measured right in front of this plate using a probe microphone (ER-7C, www.etymotic.com). For each model, the transfer function was calculated as H(ω) = P1(ω)/P3(ω). In addition to P1 and P3, the free-field sound pressure Pref(ω) produced by the loudspeaker was measured in the absence of the model at the position where the mouth was. All measurements were conducted in an anechoic chamber at a room temperature of 20 °C.
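Because both recordings use the same excitation, the source spectrum cancels when the two pressure spectra are divided. A minimal sketch of this division (the actual processing used Farina's exponential-sweep deconvolution, which additionally suppresses noise and harmonic distortion; function and variable names are ours):

```python
import numpy as np

def transfer_function_db(p1: np.ndarray, p3: np.ndarray, fs: float):
    """Magnitude of H = P1/P3 in dB from two recordings made with the *same*
    excitation signal; the source spectrum cancels in the division."""
    n = min(len(p1), len(p3))
    P1 = np.fft.rfft(p1[:n])
    P3 = np.fft.rfft(p3[:n])
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    return f, 20.0 * np.log10(np.abs(P1) / np.abs(P3))

# Toy check: if p1 is just p3 scaled by 2, |H| is a flat +6 dB line
p3 = np.zeros(64); p3[0] = 1.0
p1 = 2.0 * p3
f, h_db = transfer_function_db(p1, p3, fs=44100.0)
```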
Calculation of the transfer functions with finite element models
For comparison with the physical measurements above, the volume velocity transfer functions of the four vocal tract models were calculated using the finite element method (FEM). The FE model creation and the numerical simulation were performed similarly to Fleischer et al. [22]. Accordingly, the volume meshes (S2 File) of the four vocal tract shapes in Sec. II.B were created from the surface representations using the software Gmsh [27]. The mesh for the vowel /a/ had a mean element size of 1.91 mm and 101,240 degrees of freedom (DOF), the mesh for /u/ had a mean element size of 2.19 mm and 78,568 DOF, the mesh for /i/ had a mean element size of 1.68 mm and 124,650 DOF, and the mesh for /ə/ had a mean element size of 3.01 mm and 31,048 DOF. Note that the geometrically simple /ə/ model has a slightly greater element size because it contains no tiny details. In order to validate the numerical results, the polynomial degree of the shape functions was varied (for 2nd-order polynomials, the DOF increased to about 230,000 for /ə/ and up to 900,000 for /i/). The comparison of the simulation results for first- and second-order polynomials showed that the chosen element size was sufficient for all finite element models in the investigated frequency range, even with linear shape functions.
The acoustic simulation was performed with the open-source software FEniCS (http://fenicsproject.org; [28]) based on the Helmholtz equation

ΔP + κ²P = 0, (10)

where P is the complex-valued sound pressure as a function of the position x⃗ and the angular frequency ω, κ = ω/c is the wave number, and c = 343 m/s is the speed of sound for a temperature of 20 °C. The particle velocity V⃗(x⃗, ω) is related to the sound pressure by ∇P = −jωϱV⃗, where ϱ = 1.20 kg/m³ is the ambient density for a temperature of 20 °C. For the computation of the volume velocity transfer function, the following boundary conditions were applied:

P = Pglottis on the glottal surface (excitation), (11)
n⃗·∇P = −jωϱ Plips/Zr on the lip opening, (12)
n⃗·∇P = −jωϱ Pwall/Zwall on the vocal tract walls. (13)

Here, Pglottis is the pressure on the model surface region representing the glottis, Plips is the pressure on the surface region representing the lip opening, and Pwall is the pressure on the surface of the vocal tract walls (see Fig 3B and 3C for the individual regions). Furthermore, n⃗ is the outward normal vector of the mesh surface, and the wall impedance Zwall was empirically set to 500·ϱc = 205,800 kg/(m²·s) for appropriate damping. Since the wall impedances of the printed 3D models are not known, the simplest model (/ə/) was used to adjust the wall impedance in such a way that the transfer function was well approximated but did not change too much in comparison to the solution for the hard-walled model. The background is that for this simple model, due to small reflections within the model, the wall damping must have a small influence. The estimated value was then adopted for the other models. It is conceivable that the wall impedance depends on location and frequency (see [22]), but in order to limit the calculation effort, a constant value was used. The radiation impedance Zr was set to that of a rigid piston with a radius rlips acting into an infinite baffle [22,29]. The acoustic pressure Plips at the lip opening was determined in the center of the area representing the lip region.
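The baffled-piston radiation impedance used for Zr has a standard closed form in terms of Bessel and Struve functions. A sketch of that textbook formula (the normalization by the piston area to obtain an acoustic impedance, and the specific radius, are our assumptions, not taken from the paper):

```python
import numpy as np
from scipy.special import j1, struve

def piston_radiation_impedance(f, a, rho=1.20, c=343.0):
    """Acoustic radiation impedance (Pa*s/m^3) of a rigid circular piston of
    radius a (m) in an infinite baffle at frequency f (Hz):
        Zr = rho*c/(pi*a^2) * (R1(x) + j*X1(x)),  x = 2*k*a,
    with R1(x) = 1 - 2*J1(x)/x and X1(x) = 2*H1(x)/x (H1 = Struve function)."""
    k = 2.0 * np.pi * np.asarray(f, dtype=float) / c
    x = 2.0 * k * a
    R1 = 1.0 - 2.0 * j1(x) / x
    X1 = 2.0 * struve(1, x) / x
    return rho * c / (np.pi * a**2) * (R1 + 1j * X1)
```

At low frequencies the real part approaches the familiar limit ϱc/(πa²)·(ka)²/2, so the resistive load (and hence the spectral tilt discussed above) grows with frequency.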
Based on these boundary conditions, the transfer function H_FEM(ω) = U_lips(ω)/U_glottis(ω) was calculated, where U_lips = A_lips · V_lips, U_glottis = A_glottis · V_glottis, and A_lips and A_glottis are the cross-sectional areas of the lips and the glottis, respectively. For each of the models, the transfer function was calculated with a frequency resolution of 3 Hz from 0 to 6 kHz, taking up to 8 h per model on a standard desktop computer.

Results and discussion

Fig 4A-4D show the pressure P1 measured at the glottis, the pressure P3 measured right in front of the closed lips, and P_ref measured without the model in front of the loudspeaker for each of /a/, /u/, /i/ and / e /. It can be seen that the P1 spectra resemble typical volume velocity transfer functions for these vowels, as claimed by Kitamura et al. [10]. However, there is also a clear drift between the spectra for P3 and P_ref, with differences of up to 10 dB at 6 kHz. Fig 5 shows that the drifts are generally similar for all four models but differ in detail.

The drift differences between the models are smaller than we initially expected, because the radiation impedance Z_r in Eq (1) suggests that the drift depends on the mouth aperture (which varies from 0.44 cm² for /u/ to 5.98 cm² for / e / in our models). On the other hand, the similarity is less surprising in the context of the actual measurement setup, because P3 was measured in front of the closed mouths of the models. Fig 6A-6D show the ratios of the pressure spectra P1/P3 (which is theoretically equivalent to the volume velocity transfer function) and P1/P_ref (which merely compensates for the frequency response of the loudspeaker, as was done by Delvaux & Howard [2], for example), as well as the volume velocity transfer functions H_FEM calculated with the FEM.
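The measured transfer-function estimate is simply a magnitude ratio of two pressure spectra expressed in dB. A minimal sketch with placeholder spectra (the arrays below are synthetic, not measured data):

```python
# Transfer function estimate as a pressure ratio in dB: H(f) ~ P1(f)/P3(f),
# following the normalization proposed in the text. The synthetic flat
# spectra below stand in for measured data.
import numpy as np


def pressure_ratio_db(p_num: np.ndarray, p_den: np.ndarray) -> np.ndarray:
    """20*log10 magnitude ratio of two complex (or magnitude) spectra."""
    return 20.0 * np.log10(np.abs(p_num) / np.abs(p_den))


freqs = np.arange(0.0, 6001.0, 3.0)   # 3 Hz resolution, 0-6 kHz
p1 = 2.0 * np.ones_like(freqs)        # placeholder glottis-end pressure
p3 = 1.0 * np.ones_like(freqs)        # placeholder lip-end pressure
h_db = pressure_ratio_db(p1, p3)
print(h_db[0])                        # 20*log10(2) for this flat example
```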
The spectra P1/P3 and P1/P_ref clearly reflect the difference between P3 and P_ref in Fig 4A-4D. It is also notable that the proposed pressure ratio P1/P3 is much closer to the FE calculation than P1/P_ref for all four models. The RMS spectral differences in the 0−6 kHz range between P1/P3 and H_FEM are 3.6 dB, 2.8 dB, 3.8 dB and 1.0 dB for the models /a/, /u/, /i/ and / e /, respectively, while they are as high as 7.8 dB, 7.0 dB, 7.3 dB and 5.4 dB between P1/P_ref and H_FEM. Notably, the use of P_ref as the denominator of the volume velocity transfer function has fundamental limits, despite the fact that P_ref is supposed to correct for the loudspeaker spectral characteristics. If one considers Fig 2B and assumes that the vocal tract model is not there (U2 ≠ 0), the governing equations can be derived, and with the relation Z_r = n11/n21 shown above one obtains Eq (14). Comparing Eq (14) with Eq (9), one can see at least two significant differences. First, the ratio P1/P_ref depends on U_s, which is problematic because a transfer function is, by definition, independent of the excitation. Second, this problem could only be bypassed if either n11 = 0 (which conflicts with the principle of reciprocity) or U2 = 0 (lips closed); neither option is valid. Furthermore, forcing U2 to zero with an arbitrarily shaped stiff plate is not a valid approach either, because in that case the transmission matrix N(ω) would be changed significantly, which in turn would affect the subsequent analysis. In contrast, calculating the proposed pressure ratio P1/P3 not only prevents the general drift with respect to the true volume velocity transfer function, but may also prevent spurious spectral "defects" that might be misinterpreted as true spectral information. For example, the spurious peaks in P1/P_ref at around 2 kHz disappear in P1/P3 due to the normalization by P3 (see Fig 6A-6D).
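The RMS spectral difference used here to compare measured and simulated transfer functions is the root mean square of the pointwise dB difference; a small sketch with synthetic spectra:

```python
# RMS spectral difference (in dB) between two transfer-function magnitude
# spectra over a frequency band -- the comparison metric used in the text.
# The example spectra are synthetic.
import numpy as np


def rms_db_difference(h_a_db: np.ndarray, h_b_db: np.ndarray) -> float:
    """Root-mean-square of the pointwise dB difference between two spectra."""
    diff = np.asarray(h_a_db) - np.asarray(h_b_db)
    return float(np.sqrt(np.mean(diff ** 2)))


h_meas = np.array([0.0, 3.0, -2.0, 5.0])  # placeholder measured spectrum, dB
h_fem = np.array([1.0, 2.0, -1.0, 4.0])   # placeholder simulated spectrum, dB
print(rms_db_difference(h_meas, h_fem))   # constant 1 dB offset -> 1.0
```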
A notable feature for /a/, /i/ and /u/ (Fig 6A-6C), in contrast to / e / (Fig 6D), is that there are strong zeros between 4 and 5 kHz. These zeros are known to be caused by the sinus piriformes, which are side cavities of the main vocal tract [2]; these side cavities are not present in / e /. To assess a potential effect of the measurement method on the formant frequencies, the first four formant frequencies were determined by peak picking in the magnitude spectra of P1/P3, P1/P_ref and H_FEM for all four models. The results are given in Table 1, together with the relative formant deviations of P1/P3 and of P1/P_ref with respect to H_FEM. The average formant deviation between P1/P3 and H_FEM is 1.074%, and the average formant deviation between P1/P_ref and H_FEM is 1.131%. Hence, the formants of the two measured transfer functions are similarly close to the formants of the reference FE simulation. The overall deviation between measured and simulated formants is less than 2%, which is much smaller than the differences reported in the few previous studies that made similar comparisons [1,13]. In addition, Tables 2 & 3 show the bandwidths and amplitudes of the resonances of all models and their deviations from the finite element models. The average bandwidth and amplitude deviations between P1/P3 and H_FEM are 25.3 Hz and 4.1 dB, and between P1/P_ref and H_FEM they are 31.6 Hz and 5.1 dB. Here, too, there are no large differences between the two measured volume velocity transfer functions. It should be kept in mind that these values strongly depend on the selected wall impedance Z_wall; it would be possible to tune this value so that the deviations of amplitudes and bandwidths are minimized, but this is beyond the scope of this work. A limitation of this study is the approximation of the lip openings of the vocal tract models as straight cuts.
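Formant frequencies as in Table 1 can be obtained by peak picking in a magnitude spectrum, and the relative formant deviation is a simple percentage. A sketch using SciPy's peak finder on a synthetic one-peak spectrum (the paper does not specify its own peak-picking implementation):

```python
# Formant estimation by peak picking in a magnitude spectrum, plus the
# relative deviation metric; synthetic single-peak spectrum for illustration.
import numpy as np
from scipy.signal import find_peaks


def pick_formants(freqs_hz, magnitude_db, n_formants=4):
    """Frequencies of the n most prominent spectral peaks, sorted ascending."""
    peaks, props = find_peaks(magnitude_db, prominence=1.0)
    strongest = peaks[np.argsort(props["prominences"])[::-1][:n_formants]]
    return np.sort(freqs_hz[strongest])


def formant_deviation_pct(f_meas: float, f_ref: float) -> float:
    """Relative formant deviation in percent."""
    return 100.0 * abs(f_meas - f_ref) / f_ref


freqs = np.arange(0.0, 6000.0, 3.0)
spectrum = -np.ones_like(freqs)
spectrum[freqs == 900.0] = 10.0   # single artificial "formant" at 900 Hz
print(pick_formants(freqs, spectrum, n_formants=1))
```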
For most speech sounds, the lips form a wedge-like opening of the vocal tract, which has non-negligible acoustic effects [12]. When the proposed method is used to measure the volume velocity transfer functions of vocal tract models with such realistic lip shapes, the precise positioning of the microphone in front of the closed lips might play a role, and closing the mouth may become more complicated (modeling clay could be used). Furthermore, it might become necessary to measure the pressure at multiple points on the outer double-curved surface of the closed lips. All of these individual transfer functions can be expected to differ slightly from each other, and the averaged transfer function should then be considered the result. In most cases, however, we would expect the averaged transfer function to be very close to the one obtained when P3 is measured in the midsagittal plane, in the middle between the upper and lower lip. This issue deserves further investigation.
Conclusion
In this paper we presented a precise method for measuring the volume velocity transfer function of 3D-printed models of the vocal tract based on acoustic excitation with an external sound source. It avoids the obstacles and limitations involved in transfer function measurements with a glottal source, requires little special equipment (apart from an anechoic chamber), and is simple to conduct. The method is an extension of the approach presented by Kitamura et al. [10] and has the advantage that the relative levels of the measured resonance peaks correspond to those of the true volume velocity transfer function, and that the overall level of the transfer function corresponds to the true level (i.e., a level of 0 dB at a frequency of 0 Hz). Furthermore, we have characterized the deviation that results without the proposed normalization. This deviation consists of a general upward drift of the spectral level with increasing frequency and is relatively independent of the vocal tract model geometry. However, the fine structure of the spectral drift may introduce spurious peaks or troughs into the transfer function, which may cause misinterpretations. The proposed technique prevents this problem and enables a more accurate acoustic characterization of the increasingly used 3D-printed vocal tract models in speech and singing research than was previously possible. Although the presented procedure is not applicable to in vivo situations, it has a range of applications in basic phonetic research and the potential to improve methods for articulatory speech synthesis. Finally, the proposed method is not limited to models of the vocal tract but can be used for most kinds of tube-like acoustic resonators.
Supporting information S1 File. Printing models. Files containing the printing models of the vowels /a/, /u/, /i/, and / e /. (ZIP) S2 File. Volume meshes of the finite element models. Files containing the volume meshes as used for finite element modeling of the vowels /a/, /u/, /i/, and / e /. (ZIP) | 2018-04-03T05:17:45.171Z | 2018-03-15T00:00:00.000 | {
"year": 2018,
"sha1": "8f8eed984e6f2dda6992889b75025ba393fe3577",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0193708&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f8eed984e6f2dda6992889b75025ba393fe3577",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
208223988 | pes2o/s2orc | v3-fos-license | Intravascular ultrasound is a key diagnostic tool in subclavian vein varicosity
Varicose veins of the neck are far less common than lower extremity varicosities. Often, neck varicosities can be a sign of a more central venous obstruction. Here, we describe a patient with no risk factors for central venous obstruction who presented with a recurrent left subclavian vein (LSV) varicosity causing significant pain and discomfort that was recalcitrant to repeated phlebectomy. Venography revealed a dilated LSV with no significant venographic stenosis in the LSV or brachiocephalic vein. Intravascular ultrasound subsequently revealed a culprit hypertrophied valve that was successfully treated with valvuloplasty, resulting in durable resolution of the patient's symptoms, suggesting that intravascular ultrasound was essential in the diagnosis and treatment of this hypertrophied valve.
The prevalence of an isolated varicosity in the upper extremity or neck is exceedingly rare. The diagnosis, symptoms, and treatment of primary varicosities above the lower extremities, specifically neck varicosities, are poorly described. 1 The imaging evaluation of venous varicosities ordinarily includes duplex ultrasound. 2 More advanced cross-sectional imaging, such as computed tomography venography (CTV) or magnetic resonance venography, may be considered in cases of equivocal ultrasound findings to evaluate for deep or central venous occlusion or for preoperative planning. 3 Finally, catheter-based venography may be performed to further evaluate the deep venous system or central veins of the chest and to potentially treat the underlying pathologic process. Increasingly, intravascular ultrasound (IVUS) is being used to uncover deep venous disease that may be missed by multiplanar catheter venography and to assist with stent sizing. 4 In addition, IVUS has previously been used to characterize the cause of thoracic outlet syndrome in a number of patients. 5 The purpose of this study was to describe the usefulness of intraoperative IVUS in the diagnosis and treatment of a hypertrophied venous valve in the neck.
CASE REPORT
This report was carried out in full compliance with the Health Insurance Portability and Accountability Act and was exempt from Institutional Review Board review. The patient was informed about and consented to all diagnostic and interventional procedures and this publication.
The patient, a 56-year-old woman, presented with a single recurrent, tender, large, and palpable varicosity on the left side of the neck. The varicosity first manifested at the age of 20 years, shortly after her neck was injured while pulling a drowning victim out of a pool. The varicosity underwent phlebectomy by a general surgeon but subsequently recurred. It was again excised by a plastic surgeon, and the patient experienced no symptoms for more than three decades. However, in November 2015, symptoms of chronic pain, tenderness, and pressure recurred after she reinjured the left side of her neck while being jerked by a dog's leash. She began conservative management, including warm compresses and oral analgesics, without relief. When she would bend over or perform yoga, the varicosity would bulge and become more painful and tender. She presented to an otorhinolaryngologist for surgical evaluation of the neck varicosity and was then referred to our interventional radiology practice for a possible "vascular neck mass." Her medical history was otherwise noncontributory. Catheter venography revealed a dilated LSV draining the varicosity on the left side of the neck (Fig 2, a, top). The remainder of the left upper extremity and central veins were normal. A focal stenosis in the mid LSV appeared to be <50% and was of unknown etiology.
IVUS was performed and revealed a calcified, hypertrophied valve at the site of the stenosis (Fig 2, a, bottom). The measurements of the LSV using IVUS planimetry were 137 mm², 73 mm², and 117 mm² proximal to the valve, at the valve, and distal to the valve, respectively (Fig 2, b). Given her symptoms and the recalcitrance of the varicosity to previous surgical excision, the patient underwent balloon valvuloplasty. Based on the IVUS measurements, 14-mm and 16-mm Atlas balloons (Bard Medical, Murray Hill, NJ) were inflated to the maximum pressure.
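One way to relate the reported planimetry areas to balloon diameters is the equivalent circular diameter, d = 2·sqrt(A/π). This is an illustrative calculation only; the report does not state how the balloon sizes were derived from the areas:

```python
# Equivalent luminal diameter from an IVUS planimetry area -- an illustrative
# way to reason about balloon sizing (not stated as the authors' method).
import math


def equivalent_diameter_mm(area_mm2: float) -> float:
    """Diameter of a circle with the given cross-sectional area."""
    return 2.0 * math.sqrt(area_mm2 / math.pi)


# Areas reported proximal to, at, and distal to the hypertrophied valve.
for label, area in [("proximal", 137.0), ("valve", 73.0), ("distal", 117.0)]:
    print(f"{label}: {equivalent_diameter_mm(area):.1f} mm")
```

The roughly 13.2 mm proximal equivalent diameter is consistent with the 14-mm balloon used first.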
Postdilation IVUS revealed an incompletely disrupted valve, so an 18-mm Atlas balloon was carefully inflated to the nominal pressure (Fig 2, a, top) to minimize the risk of rupture. If rupture had occurred, balloon tamponade would have been the initial therapy, followed by stent graft placement if necessary.
Repeated venography revealed a significant improvement in the stenosis without evidence of rupture. Subsequent IVUS evaluation after valvuloplasty revealed complete disruption of the hypertrophic valve with luminal enlargement of the LSV at this level (Fig 2, a, bottom). Post-treatment IVUS planimetry revealed a subclavian vein area of 157 mm² (Fig 2, b). Final venography revealed no significant residual stenosis, with persistent but decreased filling of venous collaterals (Fig 2, a, top). The patient tolerated the procedure without complication. Post-treatment anticoagulation was not given because the patient had a focal abnormality without scar tissue or suggestion of prior chronic venous thrombosis based on venographic and IVUS findings.
At the follow-up visit 3 weeks later, the patient noted that her symptoms had significantly improved despite no change in the appearance of the varicosity. At 8-month follow-up, her symptoms had completely resolved, and the varicosity was only minimally visible (Fig 3). The patient was now able to bend over and to perform yoga without pain in her neck. She also had a significant reduction in pain and tenderness at the location of the varicosity. In efforts to minimize cost and the patient's exposure to radiation from cross-sectional imaging, no postprocedural imaging was performed, given the improvement in symptoms.
In-office duplex ultrasound was attempted but could not visualize the treated area. Because of the distance from our center, the patient was instructed to follow up only if she experienced recurrence of symptoms. At the time of writing, 3 years have elapsed since the procedure; the patient remains without symptoms, and the varicosity has completely resolved. Should her symptoms recur, the intended plan is to re-treat with valvuloplasty, as stenting in this region is not ideal and subjects the patient to risks of stent fracture and thrombosis. Although repeated phlebectomy could be performed, the patient preferred this minimally invasive approach with comparatively little to no recovery period.
DISCUSSION
Presented herein is a report of a healthy 56-year-old woman with recurrent neck vein varicosity that was manifested as a bulging, tender neck mass. Ultrasound and CTV both indicated venous stenosis but were unable to reveal the underlying cause of the varicosity or the stenosis. Diagnostic catheter venography confirmed mild stenosis, and IVUS revealed that the cause of the abnormality was a hypertrophied valve leaflet in the distal LSV.
Central venous stenosis (CVS) is commonly secondary to hemodialysis catheter placement and the presence of implantable cardiac defibrillators and pacemakers. 6 Other causes of CVS can include extrinsic compression, peripherally inserted central catheters, prior upper extremity venous thrombosis, neoplasm, sequelae of radiation therapy, and thoracic outlet syndrome. 7 CVS secondary to traumatic injury, as presented here, is exceedingly rare but has been reported with arm hyperabduction. 8 In addition, we found one previously reported case of hypertrophied venous valves as the cause of venous stenosis. 7 The present case is likely the result of a traumatic shear-type injury to a valve in the distal LSV that subsequently calcified and hypertrophied. Wilder et al 9 described a case of symptomatic subclavian vein stenosis due to a hypertrophied subclavian vein valve, suggesting that hypertrophic valves may be a rare but possible cause of venous stenosis.
CONCLUSIONS
In the lower extremities, varicosities most often arise from the superficial venous system. However, varicosities can also be a sign of deep venous disease as exemplified in this case. Whereas ultrasound, CTV, magnetic resonance venography, and catheter venography are commonly used to evaluate the deep venous system, IVUS is increasingly being used to diagnose disease in both the arterial and venous systems. In particular, IVUS can be especially useful in assessment of the vessel wall, atheromatous plaque or venous synechiae, and valvular abnormalities. 10 In addition, it can be used to assess and to confirm successful intervention, as described in this report. Although use of IVUS may increase the cost of a procedure, the potential savings of avoiding unnecessary repeated procedures, such as repeated phlebectomy in this case, may economically justify it in certain cases. Finally, this report confirms previous evidence that hypertrophied valves may play a rare but important role in the development of venous stenosis and should be considered in the differential diagnosis. | 2019-11-14T17:07:17.170Z | 2019-11-13T00:00:00.000 | {
"year": 2019,
"sha1": "0f465652c4c93278cf3c959ac7b74994187c120f",
"oa_license": "CCBYNCND",
"oa_url": "http://jvascsurgcases.org/article/S2468428719301157/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "614645c78f585e0ba830331e2b2ce827ced9108f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2823016 | pes2o/s2orc | v3-fos-license | Insulin, CCAAT/Enhancer-Binding Proteins and Lactate Regulate the Human 11β-Hydroxysteroid Dehydrogenase Type 2 Gene Expression in Colon Cancer Cell Lines
11β-Hydroxysteroid dehydrogenases (11beta-HSD) modulate mineralocorticoid receptor transactivation by glucocorticoids and regulate access to the glucocorticoid receptor. The isozyme 11beta-HSD2 is selectively expressed in mineralocorticoid target tissues, and its activity is reduced in various disease states with abnormal sodium retention and hypertension, including apparent mineralocorticoid excess. As 50% of patients with essential hypertension are insulin resistant and hyperinsulinemic, we hypothesized that insulin downregulates 11beta-HSD2 activity. In the present study we show that insulin reduced 11beta-HSD2 activity in colon cancer cell lines (HCT116, SW620 and HT-29) at the transcriptional level, in a time- and dose-dependent manner. The downregulation was reversible and required new protein synthesis. Pathway analysis using mRNA profiling revealed that insulin treatment modified the expression of the transcription factor family C/EBP (CCAAT/enhancer-binding proteins) as well as of glycolysis-related enzymes. Western blot and real-time PCR confirmed an upregulation of the C/EBP beta isoforms (LAP and LIP), with a more pronounced increase in the inhibitory isoform LIP. EMSA and reporter gene assays demonstrated the role of C/EBP beta isoforms in the regulation of HSD11B2 gene expression. In addition, secretion of lactate, a byproduct of glycolysis, was shown to mediate insulin-dependent HSD11B2 downregulation. In summary, we demonstrate that insulin downregulates HSD11B2 through increased LIP expression and augmented lactate secretion. Such mechanisms are of interest and potential significance for sodium reabsorption in the colon.
Introduction
The mineralocorticoid receptor (MR) is essential for renal sodium handling in epithelial tissues such as colon and kidney and for blood pressure control in humans. The physiological ligand of the MR is aldosterone [1]. Another adrenal steroid, cortisol, exhibits a similar affinity and transactivation potential for the MR as aldosterone, and serum concentrations of cortisol are 100- to 1000-fold higher than those of aldosterone. The mechanism that allows aldosterone to be the preferred ligand for the MR in vivo, despite the higher concentrations of cortisol, is an enzyme that inactivates cortisol specifically in MR-expressing cells [2]. This enzyme, 11β-hydroxysteroid dehydrogenase type 2 (11beta-HSD2), is encoded by the HSD11B2 gene and converts biologically active cortisol into cortisone, a steroid with negligible affinity and activation potential for the MR [3]. Thus, reduced 11beta-HSD2 activity causes cortisol-mediated MR activation, leading to renal sodium retention, suppression of renin and a salt-sensitive increase in blood pressure [4,5].
Many patients with type 2 diabetes have low plasma renin activity and are salt-sensitive [6-8]. Furthermore, we recently observed an association between salt-sensitivity and reduced 11beta-HSD2 activity in offspring of type 2 diabetic patients [9]. Thus, it is reasonable to speculate that insulin downregulates HSD11B2 and, by this mechanism, causes cortisol-mediated renal or colonic sodium retention with consequent renin suppression.
Cell Cultures
HCT116, SW-620 and HT-29 cells were grown in DMEM supplemented with 10% FBS, 2 mmol/L glutamine, 100 U/ml penicillin, and 100 µg/ml streptomycin. The cells were maintained at 37 °C in humidified 5% CO₂/95% air. All cell lines were plated in cell culture dishes and grown in DMEM with 10% FBS to confluence. Cells were incubated in DMEM supplemented with 0.3% FBS during insulin and DCA treatment; they were treated for 48 h with DCA and for 24 h with insulin. For lactate treatment, cells were incubated with 10% FBS after 24 h of synchronization in DMEM with 0.3% FBS.
RNA preparation and expression level
Total RNA was isolated using the RNeasy Mini Kit (Qiagen AG, Basel, Switzerland) according to the manufacturer's protocol. Total RNA (1 µg) was used for the synthesis of first-strand cDNA using ImProm-II Reverse Transcriptase (RT) in RT buffer (Promega Catalys AG, Wallisellen, Switzerland) according to the manufacturer's protocol. Expression of specific mRNAs was determined by quantitative real-time RT-PCR (qRT-PCR) on an ABI PRISM 7000 Sequence Detection System (Applied Biosystems, Foster City, CA). Multiplex PCR was performed according to the manufacturer's protocol (Applied Biosystems, Foster City, CA). The Assays-on-Demand (Gene Expression Assay Mix) were eukaryotic 18S rRNA endogenous control (4310893E), HSD11B2 (Hs00388669_m1), CCAAT/enhancer binding protein (C/EBP) alpha (Hs00269972_s1), C/EBP beta (Hs00270923_s1) and C/EBP delta (Hs00270931_s1). Relative gene expression was determined using the comparative CT (threshold cycle) method, which normalizes the number of target gene copies to an endogenous reference gene (18S rRNA) and to a calibrator sample. The HSD11B2, C/EBP alpha, C/EBP beta and C/EBP delta mRNA expression of each of the treated cells was normalized to the result obtained from untreated cells. The amount of target, normalized to the 18S rRNA endogenous reference and relative to the calibrator, is given by the formula 2^(−ΔΔCT). To confirm the reproducibility of mRNA determination, a minimum of 3 independent total RNA extractions were performed. Each reverse-transcription polymerase chain reaction (RT-PCR) assay was analyzed in triplicate and expressed as mean ± SD.
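The comparative CT calculation described above can be sketched directly; the CT values below are invented for illustration:

```python
# Relative expression by the comparative CT (2^-ddCT) method described in
# the text. The sample CT values are invented for illustration only.


def relative_expression(ct_target_treated: float, ct_ref_treated: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Fold change of target in treated vs. control cells, each normalized
    to the endogenous reference (here 18S rRNA)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)


# One extra cycle to threshold in treated cells (reference CTs unchanged)
# corresponds to a ~50% reduction in target mRNA.
print(relative_expression(26.0, 10.0, 25.0, 10.0))  # -> 0.5
```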
Measurement of 11beta-HSD2 activity
Cells were cultured in 6-well plates at a density of 0.5×10⁶ cells/well. After treatment, the culture medium was removed and cells were incubated for 45 min in 1 ml of medium containing 2 µCi of [1,2,6,7-³H]cortisol (60-80 Ci/mmol; Amersham, Buckinghamshire, UK). After incubation, the reaction was stopped and the steroids were extracted by the addition of three volumes of ethyl acetate. After centrifugation, the organic phase was removed and evaporated at room temperature. The residue was reconstituted in 30 µl of stop solution (2 mM cortisol and 2 mM cortisone in methanol). Ten microliters were applied to silica-coated TLC plates (G-25, UV254; Macherey-Nagel, Oensingen, Switzerland) and resolved using chloroform:ethanol (9:1). Steroids were visualized under ultraviolet light and scraped into scintillation fluid. The radioactivity was measured using a Packard 2000CA Tri-Carb Liquid Scintillation Analyzer (Packard Instrument Co, Downers Grove, IL). Experiments were carried out under non-substrate-limiting conditions, where metabolism was always less than 40%. Specific activity was expressed as picomoles (pmol) per microgram of protein per hour. The experimental results were calculated by expressing the conversion rates of cortisol to cortisone in the presence of insulin as a percentage of that in the corresponding control in the absence of insulin [15].
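The conversion-based activity readout can be sketched as follows. The count and substrate numbers are invented, and the exact bookkeeping of the actual assay (e.g., background subtraction) may differ:

```python
# Specific 11beta-HSD2 activity from a radiolabeled conversion assay: the
# fraction of counts recovered as cortisone, scaled to substrate amount,
# protein content, and incubation time. All numbers are illustrative.


def specific_activity(cpm_cortisone: float, cpm_cortisol: float,
                      substrate_pmol: float, protein_ug: float,
                      incubation_h: float) -> float:
    """pmol cortisone formed per microgram of protein per hour."""
    conversion = cpm_cortisone / (cpm_cortisone + cpm_cortisol)
    return conversion * substrate_pmol / (protein_ug * incubation_h)


# 45 min incubation = 0.75 h; express treated activity as % of control.
control = specific_activity(3000.0, 7000.0, 100.0, 50.0, 0.75)
treated = specific_activity(2000.0, 8000.0, 100.0, 50.0, 0.75)
print(f"{100.0 * treated / control:.0f}% of control")
```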
For detection of IGF-1 and insulin receptors, cells were either untreated or treated with 100 nM insulin for 24 h and lysed in RIPA buffer containing 1 mM sodium orthovanadate, 2 µg/ml aprotinin, 1 µg/ml leupeptin, and 1 mM phenylmethanesulfonyl fluoride. The lysates were incubated with either IGFR or insulin receptor antibodies (3027, 3025; Cell Signaling) at 4 °C overnight. The next morning, the samples were incubated with Protein A/G Plus Agarose (Santa Cruz Biotechnology) for 1 h at 4 °C. The beads were washed and boiled in SDS loading buffer, and proteins were separated by SDS-PAGE.
De novo protein synthesis
HT-29 cells were cultured as outlined above and pretreated with the protein synthesis (translational elongation) inhibitor cycloheximide (CHX, 10 µM) for 1 h before the addition of insulin (10⁻⁷ M). At the end of the 24 h treatment, cells were harvested for RNA isolation and qRT-PCR analysis.
HSD11B2 mRNA stability
HT-29 cells were cultured as outlined above and treated with insulin (10⁻⁷ M) for 12 h. Transcription was then stopped with DRB (25 µM), and cells were harvested at discrete times (0-12 h) for RNA isolation and qRT-PCR analysis.
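The half-life implied by such a transcription-shutoff time course is typically obtained from a log-linear fit of remaining mRNA versus time; a sketch on synthetic first-order decay data with a true half-life of 6 h:

```python
# Estimating mRNA half-life from a transcription-shutoff (DRB) time course
# by a log-linear fit of remaining mRNA vs. time. The data are synthetic.
import numpy as np


def half_life_hours(times_h: np.ndarray, fraction_remaining: np.ndarray) -> float:
    """Fit ln(fraction) = -k*t and return t_half = ln(2)/k."""
    k = -np.polyfit(times_h, np.log(fraction_remaining), 1)[0]
    return float(np.log(2.0) / k)


t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 12.0])
frac = 0.5 ** (t / 6.0)   # ideal first-order decay, t_half = 6 h
print(f"{half_life_hours(t, frac):.2f} h")
```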
Small interfering RNA (siRNA) experiments
HT-29 cells were transiently transfected using Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA) following the manufacturer's recommendations. The transfection mixture was removed after 24 h incubation. The cells were further incubated under normal growth conditions for another 24 h before mRNA extraction. The siRNA duplexes for C/EBP alpha or C/EBP beta (Qiagen AG, Basel, Switzerland) and a negative control siRNA (Invitrogen, Carlsbad, CA, USA) were used for transfection at a final concentration of 50 nM.
Electrophoretic mobility shift assay (EMSA) and nuclear extract preparation

Around five million adherent cells were detached with 3 ml of PBS on ice and pelleted for 5 min at 900 g. Pellets were stored at −80 °C until protein extraction. Nuclear extract preparation and EMSA were performed as previously described [17,18]. The protein yield was determined by the Bradford method. EMSA probes were generated by annealing complementary single-stranded oligonucleotides and labeled with [γ-³²P]ATP and T4 polynucleotide kinase. Specific binding was competed with a 100× molar excess of unlabeled oligonucleotides whose sequence is recognized by the C/EBP factors (5′-tgcagattgcgcaatctgca-3′; the nucleotide motifs of interest are bold-faced). The binding reactions were carried out in 10 µl of buffer [20 mM HEPES, pH 7.5; 35 mM NaCl; 60 mM KCl; 0.01% NP-40; 2 mM DTT; 0.1 mg/ml BSA; 4% Ficoll] containing 1.75 pmol of labeled probe, 4 µg of nuclear proteins and 1 µg of poly(dI-dC). Mixtures were incubated at 4 °C for 20 min in the presence or absence of unlabeled competitor. DNA-protein complexes were separated on a 5% polyacrylamide gel in 0.5× Tris-borate-EDTA buffer for 90 min at 140 V. Gels were dried for 2 h at 80 °C and analyzed on a PhosphoImager Cyclone (Packard).
qRT-PCR analysis using human diabetes RT² Profiler PCR Arrays
The RT² Profiler PCR Array PAHS-30C (SABiosciences, MD, USA) was designed to analyze 84 genes related to the human insulin signaling pathway. The RT-PCR was carried out on an ABI PRISM 7000 Sequence Detection System (Applied Biosystems, Foster City, CA). HT-29 cells were treated for 24 h with insulin (10⁻⁷ M). Total RNA (1 µg) was used as template to synthesize cDNA with the RT² First Strand kit (SABiosciences). The PCR cycling conditions were as follows: 95 °C for 10 min, followed by 40 cycles of 95 °C for 15 s and 60 °C for 60 s. At the end of the PCR cycling steps, data for each sample were displayed as a melting curve. The ABI SDS software (Applied Biosystems) was used to determine a critical threshold (Ct), the cycle number at which the linear phase for each sample crossed the threshold level. Beta-2-microglobulin was used as housekeeping gene. The expression of HSD11B2 in the 3 experiments concerned was monitored in parallel by real-time PCR, confirming significant downregulation by insulin. Records were deposited in the GEO database under accession number GSE51677.
Transient transfection and reporter gene assay
Transfections were performed with FuGENE HD transfection reagent (Roche, Rotkreuz, Switzerland) using 3 µl of reagent per 1 µg of plasmid. The vector pCMV-hRL (Renilla reniformis luciferase) (Promega Catalys AG, Wallisellen, Switzerland) was used for normalization of transfection efficiency. The construct p4.5 kb-HSD11B2 was a generous gift from Dr. K. Yang [19]. The p0.2 kb-HSD11B2 plasmid construct was described previously [18]. For expression of transcription factors, various amounts of the vectors pCMV-LIP and pCMV-LAP, a generous gift from U. Schibler [20], were added to the DNA mixture. After 6 h, the transfection medium was replaced with normal growth medium for 18 h. Thereafter, cells were lysed and luciferase activities were detected with the Dual-Luciferase Reporter Assay System (Promega Catalys AG, Wallisellen, Switzerland) and a MediatorsPhL Luminometer (Mediators Diagnostic Systems, Vienna, Austria). Firefly luciferase activity was expressed relative to Renilla luciferase activity to account for differences in transfection efficiency. When a CMV-LacZ control vector was transfected, the Dual-Light system (Applied Biosystems, Foster City, CA) was used to determine the luciferase activity. Results were confirmed in multiple independent experiments.
Bioinformatics and statistics
Data are expressed as mean ± SD of triplicate samples of a representative experiment repeated at least three times. Statistical analysis was performed using Student's t test or ANOVA followed by a contrast test with Tukey error protection. Differences were considered significant at p<0.05 (*p<0.05, **p<0.01, ***p<0.001). Transcription factor binding sites were analyzed with the Match program.
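The comparisons described here can be sketched with SciPy on invented triplicate data; the Tukey-protected contrast step that follows the ANOVA is omitted in this sketch:

```python
# Student's t test and one-way ANOVA on invented triplicate expression data,
# sketching the statistical comparisons described in the text (the post hoc
# Tukey contrast step is omitted here).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=1.00, scale=0.05, size=3)  # normalized expression
treated = rng.normal(loc=0.50, scale=0.05, size=3)
third = rng.normal(loc=0.75, scale=0.05, size=3)

t_stat, p_ttest = stats.ttest_ind(control, treated)
f_stat, p_anova = stats.f_oneway(control, treated, third)
print(f"t test p = {p_ttest:.4f}; ANOVA p = {p_anova:.4f}")
```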
Results
Sustained insulin treatment decreases 11beta-HSD2 activity and HSD11B2 gene expression in colonic cancer cell lines

Regulation of enzyme activity by insulin was examined in HSD11B2-expressing [16-18,21] human colonic cell lines (HCT116, SW620 and HT-29) (Fig. 1A). Cells were incubated for 24 h with insulin (10⁻¹¹ M to 10⁻⁷ M) in cell culture medium containing 0.3% FBS. Insulin caused a dose-dependent decrease in 11beta-HSD2 activity in all tested colonic cell lines, with a significant reduction at 10⁻⁹ M in the HCT116 cell line (p<0.05). This effect was not restricted to colonic cell lines, since similar results were obtained with HSD11B2-expressing [19] JEG-3 cells (Fig. S1). Due to the robust response (≈30% reduction), we further characterized the molecular mechanisms in HT-29 cells.
Sustained insulin treatment decreases HSD11B2 gene expression, activity and protein in HT-29 cells in a dose- and time-dependent manner
We next examined whether the insulin-reduced 11beta-HSD2 activity coincides with changes in gene and protein expression (Fig. 1A). Increasing concentrations of insulin ranging from 10^-9 to 10^-5 M caused a concentration-dependent decrease in HSD11B2 mRNA levels 24 h after treatment (Fig. 1B). A maximal effect was observed at a concentration of 10^-7 M, where HSD11B2 mRNA was lowered by 50% (p < 0.05) and the activity by 35% (p < 0.05) (Fig. 1B). Thereafter, we investigated the time-dependent regulatory effect of insulin on HSD11B2 gene expression, activity and protein level. As shown in Figure 1C, a time-dependent decrease in 11beta-HSD2 activity was observed, with a significant reduction 12 h after treatment (p < 0.05). Interestingly, the HSD11B2 mRNA increased during the first 10 h and decreased thereafter. After 16 h the mRNA reached minimal levels and remained low up to 48 h (p < 0.05) (Fig. 1C). In agreement, HSD11B2 protein was reduced by 18 h and 24 h of insulin treatment (Fig. 1D).
The downregulation of HSD11B2 was reversible upon removal of insulin from the medium after 24 h of incubation. Indeed, 48 hours after the removal, HSD11B2 mRNA levels in control and insulin-treated conditions were similar (Fig. 2A). Next, HT-29 cells were treated for 24 h with insulin in the absence and presence of the protein synthesis inhibitor cycloheximide (CHX). The effect of insulin in reducing HSD11B2 mRNA was abolished in the presence of CHX, indicating that de novo protein synthesis was required (Fig. 2B). To determine whether insulin reduces HSD11B2 mRNA stability, we assessed the half-life of HSD11B2 mRNA by a standard mRNA decay assay using 25 mM DRB, an inhibitor of mRNA synthesis. As shown in Figure 2C, insulin did not alter the half-life of HSD11B2 mRNA.
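An mRNA half-life from a transcription-blocked decay time course of this kind is typically estimated by fitting ln(remaining mRNA) versus time and taking t1/2 = ln(2)/k. The sketch below uses an idealized hypothetical time course, not values from this study.

```python
# Half-life estimation from a DRB-type mRNA decay assay: least-squares
# fit of ln(level) vs time gives the first-order decay constant k, and
# t1/2 = ln(2)/k. Time points and levels below are illustrative only.
import math

def half_life(times, levels):
    logs = [math.log(v) for v in levels]
    n = len(times)
    tm = sum(times) / n
    lm = sum(logs) / n
    slope = (sum((t - tm) * (l - lm) for t, l in zip(times, logs))
             / sum((t - tm) ** 2 for t in times))
    k = -slope                      # first-order decay constant (1/h)
    return math.log(2) / k

times = [0.0, 2.0, 4.0, 6.0]        # hours after transcription block
levels = [100.0, 50.0, 25.0, 12.5]  # % mRNA remaining (ideal 2 h half-life)
t_half = half_life(times, levels)
```

Because the example levels halve exactly every 2 h, the fit recovers a 2 h half-life; real qRT-PCR time courses would scatter around the fitted line.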
Insulin pathway analysis
In order to understand the molecular mechanism by which insulin downregulates HSD11B2, we aimed to characterize the insulin pathway in HT-29 cells. Western blot experiments demonstrated the expression and activation of the IGF-1 (IGFI-R) and insulin (IR) receptors in a time- and dose-dependent manner (Figs. 3A, B). Both receptors are phosphorylated within the first 10 min of insulin treatment, while IR was more sensitive than IGFI-R to low doses of insulin (Figs. 3A, B). The role of downstream kinases in insulin-dependent HSD11B2 repression was assessed using the PD098059 and AKT VIII inhibitors. Figure 3C shows that both the MAPK/ERK and the PI3K pathways mediated the insulin effect.
Total mRNA of insulin-treated HT-29 cells was extracted and subjected to RT² profiling to quantify the expression of insulin pathway components. The Human Insulin Signaling Pathway RT² Profiler PCR Array profiles the expression of 84 insulin-responsive genes. Twenty-two genes differentially regulated in HT-29 cells after insulin treatment are reported in Table S1, and the pathways involved are depicted in the scheme of Figure 4. The RT² profiler revealed a characteristic pattern of insulin insensitivity, with reduced expression of insulin pathway components: IR, IGFI-R, insulin receptor substrate 2 (IRS2) and the insulin-regulated glucose transporter (GLUT-4). Sustained insulin treatment also promoted glycolysis in HT-29 cells. While expression of the insulin-regulated glucose transporter GLUT-4 was downregulated, the GLUT-1-encoding messenger was increased, facilitating the import of glucose into the cells independently of growth factor stimulation. Hexokinase 2, the enzyme that phosphorylates glucose to glucose-6-phosphate, a rate-limiting step of glycolysis, was upregulated, along with pyruvate kinase 2 (PKM2), which converts phosphoenolpyruvate (PEP) into pyruvate. In contrast, the enzyme that dephosphorylates fructose-1,6-bisphosphate into fructose-6-phosphate, thereby antagonizing glycolysis, was downregulated.
Effect of insulin on C/EBP alpha, C/EBP beta, and C/EBP delta mRNA levels
We present evidence in Figure 5B that treatment of HT-29 cells with various concentrations of insulin (10^-9 to 10^-5 M) for 24 h caused a concentration-dependent increase in C/EBP beta mRNA expression. In contrast, insulin suppressed C/EBP alpha mRNA expression in a dose-dependent manner. At a concentration of 10^-7 M, insulin decreased C/EBP alpha mRNA by 51% (p < 0.01), whereas C/EBP delta mRNA expression was unchanged. These results show that the insulin-dependent reduction of HSD11B2 mRNA correlates with the expression pattern of two of the three investigated members of the C/EBP family of transcription factors in HT-29 cells.
Insulin-regulation of C/EBP alpha and C/EBP beta proteins
To investigate whether C/EBP alpha or C/EBP beta plays a role in the insulin-dependent repression of HSD11B2 gene expression, the expression of C/EBP alpha and C/EBP beta in HT-29 cells was analyzed by Western blot (Fig. 5A). C/EBP alpha mRNA can give rise to two polypeptides of 42 kDa and 30 kDa [22,23], while C/EBP beta can be translated into an activating or an inhibitory isoform (LAP, 38 kDa, or LIP, 21 kDa, respectively) [20,24]. Treatment of HT-29 cells with insulin for 24 h increased the nuclear levels of C/EBP alpha (42 kDa isoform) and of both C/EBP beta isoforms, LAP and LIP, and decreased the nuclear levels of C/EBP alpha (30 kDa isoform) in a dose-dependent manner. In parallel, the expression of HSD11B2 decreased concomitantly, with a maximal effect obtained at 10^-6 M insulin (Fig. 5A). However, in response to the same dose of insulin, the increase in LIP (~130-fold at 10^-6 M insulin) was greater than that in LAP (~3-fold at 10^-6 M insulin), resulting in a decreasing LAP/LIP ratio (Fig. 5A). Expression of C/EBP alpha (42 kDa isoform) was slightly increased, while expression of C/EBP alpha (30 kDa isoform) was decreased by 50% (Fig. 5A).
HSD11B2 gene expression is up-regulated by C/EBP alpha/beta silencing
The effect of C/EBP alpha/beta knockdown on HSD11B2 was assessed in HT-29 cells. In this siRNA transfection experiment, C/EBP alpha and C/EBP beta mRNA were significantly downregulated (Fig. 5C, D, left panels). Importantly, the mRNA levels of HSD11B2 increased following transfection with siRNA against both isoforms (Fig. 5C, D, right panels).
Figure 5 legend (fragment): (B) Concentration-dependent effects of insulin on C/EBP alpha, C/EBP beta, and C/EBP delta mRNA expression. HT-29 cells were treated as in (A). The levels of C/EBP alpha (open circles), C/EBP beta (open squares), and C/EBP delta (filled triangles) mRNA were measured by qRT-PCR with S18 as internal control. Expression levels in treated cells were normalized to untreated controls (100%). Representative data from at least three independent experiments. Relative intensity was determined by densitometric scanning. The ratio of relative densities of 11beta-HSD2 to beta-actin in cells cultured in the absence of hormone was taken as 100% (control). The ratio of relative densities of nuclear extract proteins to HDAC in cells cultured without hormone was taken as 100% (control). *LIP was undetectable in the control samples, so the LAP/LIP ratio was not calculated. (C, D) Silencing of C/EBP alpha (C) and C/EBP beta (D) was performed using siRNA. The expression of C/EBP alpha, C/EBP beta (left panels) and HSD11B2 (right panels) mRNA was measured by qRT-PCR. doi:10.1371/journal.pone.0105354.g005

Figure 6 legend (fragment): Binding of C/EBP alpha/beta to the human HSD11B2 promoter. (A) Nuclear proteins isolated from HT-29 cells bind to the identified C/EBP alpha/beta sites. 4 µg of nuclear extracts isolated from insulin-treated (for the indicated period of time, 10^-7 M) or untreated HT-29 cells were incubated with a radiolabeled probe encompassing the consensus C/EBP alpha/beta site in the presence or absence of non-radiolabeled (100×) competitor probe (cons C/EBP alpha/beta or mut C/EBP alpha/beta) (lanes 1-7). Arrows indicate C/EBP alpha/beta-DNA shifts (C1, C2, C3) separated from free probe by gel electrophoresis. The complex C3 is formed in the presence of the radiolabeled -198 C/EBP alpha/beta probe (lanes 14-17), while complex C2 is formed in the presence of the radiolabeled -4362 C/EBP alpha/beta probe (lanes 22-25).

Insulin regulation of C/EBP-DNA complexes
The in silico analysis of the human HSD11B2 gene promoter sequence revealed four putative binding sites for C/EBPs, located at positions -4361, -1985, -198 and -177 bp from the transcriptional start site (Table 1).
The site at -4361 shows the highest match with the consensus sequence (Table 1). Different probes were labeled and incubated in the presence of nuclear extracts isolated from insulin-treated HT-29 cells. EMSA performed with the probe containing the consensus C/EBP binding site revealed three specific complexes (Fig. 6A, lane 1, denoted C1-3). The signals were reversed by competition with the unlabeled probe harboring the consensus C/EBP site (Fig. 6A, lane 6), while they were unaffected when the probe harbored the mutated C/EBP sites (Fig. 6A, lane 7). Therefore, the C1-3 signals likely correspond to C/EBP-DNA complexes. C/EBP binding to the consensus probe was elevated with increased duration of insulin treatment (Fig. 6A, lanes 1-5), reflecting the increased level of C/EBP beta found by Western blot (Fig. 5A). Interestingly, the intensity of C2 increased more than that of C1, C2 being relatively more abundant than C1 after 24 h of insulin treatment than in controls (lanes 1, 5).
The binding of SP1, a transactivating factor known to regulate HSD11B2 expression [25], to its consensus binding site (Table 2) occurred in the unstimulated condition and was slightly increased upon insulin treatment (Fig. 6B, lanes 1-4), suggesting that this factor might not be involved in HSD11B2 repression upon insulin stimulation.
Considering the distinctive pattern of the complexes formed with the -198 probe, a ChIP assay was performed, which confirmed the binding of the C/EBP isoforms. Upon insulin treatment, C/EBP beta binding to the HSD11B2 promoter increased in a time-dependent manner, while the C/EBP alpha interaction decreased (Fig. 6C), in agreement with the levels of the respective proteins (Fig. 5A).
Modulation of HSD11B2 promoter activity by C/EBP beta isoforms
To confirm the importance of the LAP/LIP ratio in the regulation of HSD11B2 gene expression at the transcriptional level, we used a reporter assay (Fig. 7). The construct p4.5 kb-HSD11B2 encompasses the region -4.5 kb to +0.116 kb of the human HSD11B2 promoter, cloned in front of the luciferase-encoding plasmid pGL3. Luciferase activity was measured as an indicator of HSD11B2 promoter activity. The construct p4.5 kb-HSD11B2 was co-transfected into HT-29 cells with plasmids encoding the long isoform of C/EBP beta (LAP) alone or in combination with the short isoform (LIP). The pcDNA-LAP/pcDNA-LIP plasmid DNA ratio was used to represent the LAP/LIP ratio in transfected cells, while keeping the total amount of transfected plasmid constant (the empty pcDNA3 vector was used to compensate DNA quantities). Luciferase activity correlated with the amount of pcDNA-LAP transfected (Fig. 7A). In contrast, increasing LIP expression decreased luciferase activity (Fig. 7B). To validate the role of the characterized C/EBP beta binding sites, mutagenesis was performed. Mutating the sites -4392 and -198 partly reduced the basal (Fig. 7C) and the LAP-induced (Fig. 7D) promoter activities. These data suggested that several binding sites participate in C/EBP-mediated HSD11B2 promoter activity. Surprisingly, the reporter assay experiments failed to show any insulin-dependent regulation of the HSD11B2 promoter, suggesting that insulin action might be mediated at an epigenetic level.

Table 1. Probes used for the EMSA experiments with C/EBP. The weight matrix for the consensus C/EBP alpha/beta binding motif is given on top. The consensus C/EBP alpha/beta binding motif was aligned with the potential C/EBP binding sites identified in the human HSD11B2 promoter, located at positions -177, -198, -1985 and -4362 bp. "cons C/EBP" and "mut C/EBP" designate 20- to 24-mer oligonucleotides based respectively on the consensus and mutated binding sites for C/EBP. Nucleotides mismatched with the matrix are underlined; nucleotides identical to the consensus sequence are in bold, and the percentage of match with the consensus sequence is indicated. -177 C/EBP, -198 C/EBP, -1985 C/EBP and -4362 C/EBP denote the probes harboring the putative C/EBP alpha/beta binding sites located in the human HSD11B2 promoter. doi:10.1371/journal.pone.0105354.t001 [The probe sequences and weight matrix did not survive extraction and are not reproduced here.]

Table 2. Probes used for the EMSA experiments with SP1. [Probe sequences not reproduced here.]

Figure 7 legend (fragment): ...promoter cloned into pGL3-basic luciferase vector (p4.5 kb-HSD11B2, 400 ng) and a dose response of the LAP-expressing vector (pCMV-LAP, 6.25 to 400 ng). A schematic representation of the HSD11B2 promoter is shown on the left side. The transcriptional initiation site is indicated by an arrow (+1). The empty pcDNA3 vector was used to equalize the amount of transfected DNA in every condition, and pCMV-hRL (100 ng) was used as
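The "% match with the consensus" scoring described for Table 1 amounts to counting the positions of a candidate site that are identical to the consensus motif. The sketch below uses an illustrative consensus and a hypothetical site, not the paper's actual probes.

```python
# Sketch of the percent-match scoring used to rank candidate C/EBP sites
# against the consensus motif. Both sequences below are illustrative
# placeholders, not the probes from Table 1.

def percent_match(site, consensus):
    """Percent of positions identical between a site and the consensus."""
    assert len(site) == len(consensus), "sequences must be aligned"
    hits = sum(1 for a, b in zip(site, consensus) if a == b)
    return 100.0 * hits / len(consensus)

consensus = "ATTGCGCAAT"   # a commonly cited C/EBP consensus (illustrative)
site      = "ATTGCACAAT"   # hypothetical promoter site with one mismatch
score = percent_match(site, consensus)
```

A full position weight matrix, as used by tools like Match, would additionally weight each position by its information content rather than scoring identities only.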
The insulin-dependent lactate synthesis modulated 11beta-HSD2 activity
Next, we challenged the hypothesis that lactate, a potential HDAC inhibitor and a byproduct of glycolysis that is increased under insulin stimulation, mediates HSD11B2 downregulation. Lactate secretion was quantified under insulin treatment, and 11beta-HSD2 activity was monitored under lactate stimulation or blockage of lactate synthesis. Figure 8A shows a dose-dependent increase in lactate secretion by insulin in HT-29 cells. Treatment with lactate alone significantly reduced 11beta-HSD2 activity in HT-29 and HCT116 cells (Fig. 8B). Dichloroacetate (DCA) is a pyruvate dehydrogenase kinase (PDK) inhibitor whose action restores the normal oxidative breakdown of pyruvate, thus indirectly preventing glycolysis [26]. Used alone, DCA reduced lactate production in HT-29 cells (Fig. 8C); in combination with insulin, it reduced the insulin-dependent stimulation of lactate secretion (Fig. 8C). Most importantly, DCA reduced the insulin-dependent downregulation of 11beta-HSD2 activity (Fig. 8D).
Insulin-dependent regulation of HSD11B2
The present investigation revealed, in three different human cell lines, that insulin reduces the activity of 11beta-HSD2. We report for the first time that the dose- and time-dependent effect of insulin is attributable to diminished transcriptional activity rather than to reduced stability of the transcribed mRNA. A peculiar finding of the insulin-induced downregulation of HSD11B2 is the increase in mRNA levels during the first 8-10 h without a concomitant increase in activity or protein content (Fig. 1C), an observation previously made for C/EBPs. The mechanism underlying this discrepancy is unknown. One possible explanation might be the temporal induction of small regulatory RNA molecules interfering with transcription, as has recently been demonstrated for the GLUT-4, hormone-sensitive lipase, fatty acid-binding protein aP2 and peroxisome proliferator-activated receptor gamma 2 genes [27,28].
Mechanisms accounting for insulin-dependent HSD11B2 downregulation
Our study suggests that an insulin-dependent decrease in HSD11B2 expression could be related to changes in the LAP/LIP ratio, chromatin structural changes or lactate production.
1-A decreased LAP/LIP ratio inhibiting HSD11B2 expression. An in silico analysis of the HSD11B2 promoter predicted binding sites for C/EBPs. This is important since insulin is known to modulate the expression of the two isoforms of C/EBP beta, LAP and LIP [10,11,23,29]. The LAP/LIP ratio is modulated by mTOR, a downstream target of the insulin pathway, shifting C/EBP beta translation toward LIP [24]. We made the interesting observation that mTOR and AKT VIII inhibitors rescued HSD11B2 expression. Moreover, EMSA experiments demonstrated that, following insulin stimulation, there was an increased association of the C2 product with the HSD11B2 promoter. According to the literature, this C2 product comprises a LAP/LIP dimer [20,30]. These correlations were ascertained by reporter assays showing i) an upregulation of promoter activity concomitant with LAP overexpression, ii) the requirement of both non-canonical C/EBP binding sites for promoter activity, and iii) the sensitivity of the reporter construct towards the C/EBP beta LAP/LIP ratio. Taken together, the data suggest that C/EBP beta, most probably LAP, regulates the basal expression of HSD11B2, while LIP mediates insulin-dependent HSD11B2 gene repression. Hence, HSD11B2 expression is regulated by the LAP/LIP ratio in a way similar to HSD11B1 [12,14].
2-Other potential participants in the insulin-dependent inhibition of HSD11B2 transcription. Despite the important findings concerning the regulatory role of the LIP/LAP ratio, some questions remain regarding the mechanism of the insulin-dependent decrease of HSD11B2 expression. In transfected cells, we observed the inability of insulin to downregulate the expression of a reporter gene fused to the HSD11B2 promoter (data not shown). We first hypothesized that, by transfecting a large amount of plasmid into the cells, the number of cis elements available for C/EBP proteins is far in excess. In this scenario, the newly synthesized LIP molecules in the presence of insulin would be able to bind plasmid DNA without displacing the bound LAP.
Because HSD11B2 transcription is activated in the first hours and inhibited in the later hours of insulin treatment, it is possible that the stability of the luciferase protein did not reflect the real-time activity of the promoter. Indeed, highly stable reporters accumulate to greater levels in cells, but their concentrations change more slowly relative to changes in transcription. Additional experiments, in which the promoter of HSD11B2 is cloned into a plasmid encoding an unstable reporter gene, including for example a PEST signal, would test this hypothesis.
Moreover, gene repression is sometimes dependent on chromosomal embedding, and additional sequences located either far upstream of the promoter in 5', as described for the PEPCK gene promoter under insulin treatment [31], or even within the 3' region in the intronic sequence, might account for the insulin-dependent downregulation of HSD11B2. A sequence alignment using the VISTA program shows some well-conserved sequences in intron I that could potentially act as intronic enhancers (Fig. S2) [32].
Furthermore, gene expression is also regulated by histones and DNA wrapping. Yet, transiently transfected DNA acquires a conformation structurally different from the counterpart chromatin-integrated DNA, which may underlie the differences in the mechanisms of activation of the two templates [33]. Hence, we cannot exclude that epigenetic mechanisms (i.e. histone deacetylation and DNA methylation) are involved in the insulin-dependent HSD11B2 downregulation. In line with this, the HSD11B2 gene contains two CpG islands within the promoter that indeed regulate gene expression [34]. Moreover, C/EBP beta is known to cooperate with coactivators such as SWI/SNF, which only work on chromosome-embedded genes [35].

Figure 7 legend (fragment): ...transfection efficiency control. Cells were lysed for luciferase assays 24 h after transfection, and the readings were normalized to Renilla activity. (B) HT-29 cells were transfected with the plasmids p4.5 kb-HSD11B2 (400 ng), pRL-CMV (100 ng), pCMV-LAP (50 ng) and an increasing quantity of pCMV-LIP (50 ng to 400 ng). (C) HT-29 cells were transfected with the wild-type p4.5 kb-HSD11B2 and p0.2 kb-HSD11B2 constructs or with the C/EBP-mutated constructs. (D) HT-29 cells were transfected with the wild-type p4.5 kb-HSD11B2 or the C/EBP-mutated construct together with increasing concentrations of pCMV-LAP. doi:10.1371/journal.pone.0105354.g007

3-The potential role of lactate production in inhibiting HSD11B2 transcription. Interestingly, mRNA profiling underlined the reprogramming of the transcriptome from insulin-sensitive towards insulin-insensitive cells, with activation of the glycolytic pathway and consequently lactate production (Table S1, Fig. 4). In line with the literature, lactate secretion and pH changes were monitored in HT-29 cells upon insulin treatment [36].
On the one hand, a decrease in pH was shown to directly inhibit 11beta-HSD2 activity in kidney tubules [37]; on the other hand, lactate was shown to directly inhibit HDAC activity and thereby regulate gene expression in HCT116 cells [38]. The 153 gene probes, including HSD11B2, downregulated by all four HDAC inhibitors are listed in Supplementary Table 2 of [38]. In agreement with this observation, inhibition of lactate synthesis by DCA significantly reduced the insulin effect, while treatment with lactate repressed 11beta-HSD2 activity in our cellular models (HT-29 and HCT116). In this respect, lactate can be considered a potential regulator of HSD11B2 expression, independently of or in parallel to LIP/LAP. This is also strengthened by our previous observation of decreased HSD11B2 expression along the rat intestine [39], which is inversely correlated with the intestinal lactate concentration [40]. Lactate is produced by gut bacteria and is found in the rectum in the millimolar range under physiological conditions [41]. Our finding that 11beta-HSD2 activity was decreased using 50 mM lactate is consistent with the literature [38], although it is uncertain whether such concentrations can be reached locally in the gut, or whether such a downregulation would occur in vivo with longer exposure and lower amounts. Nevertheless, an increased abundance of lactic acid bacteria, associated with methylation changes in intestinal cells, was reported in type 2 diabetic patients [42]. Moreover, plasma lactate is associated with blood pressure [43] and type 2 diabetes [44]. In addition, lactate is also produced (up to 17 mM) during ischemia in kidneys [45]. Notably, ischemia has been related to renal tubular dysfunction with increased blood pressure and reduced 11beta-HSD2 activity [46]. Finally, lactate, as an indicator of oxidative capacity, was found to predict incident diabetes, since oxidative capacity is decreased in type 2 diabetes.
Lactate is therefore strongly related to insulin resistance [47]. Whether decreased oxidative capacity is a cause or consequence of diabetes is unknown, but the link with HSD11B2 downregulation has to be strongly considered.
Sodium reabsorption is an important function of the kidney, but also of the rectal and colonic mucosa [48,49]. This mechanism is regulated, at least in part, by the MR [50], with subsequent activation of the amiloride-sensitive epithelial sodium channel (ENaC) [51,52]. Here, we provide evidence for an insulin-dependent downregulation of HSD11B2, a prerequisite for cortisol-mediated MR transactivation leading to an increase in sodium reabsorption in the colon. Altogether, these data suggest that the downregulation of HSD11B2 expression in colonic cancer cell lines after long-term insulin treatment is the consequence of LIP overexpression together with increased lactate production, both acting at an epigenetic level. These mechanisms are of interest and significance for understanding sodium reabsorption in the colon in health and disease states.

Figure 8 legend: Lactate accumulation in the media upon insulin stimulation and insulin-dependent downregulation of 11beta-HSD2 activity. (A) Dose-response effect of insulin on L-lactate production in cultured HT-29 cells after 24 h incubation. The lactate concentration found in the media of HT-29 cells after 24 h of culture is reported above the bars (mean ± SEM). (B) 11beta-HSD2 activity in cultured HT-29 and HCT116 cells exposed to exogenous L-lactate for 3 h. (C) 24 h L-lactate production in cultured HT-29 cells exposed to DCA alone or in combination with insulin. (D) 11beta-HSD2 activity in cultured HT-29 cells exposed to DCA alone or in combination with insulin. doi:10.1371/journal.pone.0105354.g008
Background
Comparative teleost studies are of great interest since teleosts are important in aquaculture and in evolutionary research. Comparing the genomes of fully sequenced model fish species with those of farmed fish species through comparative mapping offers shortcuts for quantitative trait locus (QTL) detection and for studying genome evolution through the identification of regions of conserved synteny in teleosts. Here, a comparative mapping study is presented, based on radiation hybrid (RH) mapping of genes of the gilthead sea bream Sparus aurata, a non-model teleost fish of commercial and evolutionary interest, as it represents the worldwide-distributed, species-rich family Sparidae.
Results
An additional 74 microsatellite markers and 428 gene-based markers appropriate for comparative mapping studies were mapped on the existing RH map of Sparus aurata. Anchoring the RH map to the genetic linkage map resulted in 24 groups, matching the karyotype of Sparus aurata. Homologous sequences in Tetraodon were identified for 301 of the gene-based markers positioned on the RH map of Sparus aurata. Comparison between Sparus aurata RH groups and Tetraodon chromosomes (the Tetraodon karyotype consists of 21 chromosomes) in this study reveals an unambiguous one-to-one relationship, suggesting that three Tetraodon chromosomes correspond to six Sparus aurata radiation hybrid groups. The exploitation of this conserved synteny relationship is furthermore demonstrated by in silico mapping of gilthead sea bream expressed sequence tags (ESTs) that give a significant similarity hit to Tetraodon.
Conclusion
The addition of primarily gene-based markers substantially increased the density of the existing RH map and facilitated comparative analysis.
The anchoring of this gene-based radiation hybrid map to the genome maps of model species broadened the pool of candidate genes controlling growth, disease resistance, sex determination and reversal, reproduction, and environmental tolerance in this species, all traits of great importance for QTL mapping and marker-assisted selection. Furthermore, this comparative mapping approach will help provide insights into chromosome evolution and into the genetic makeup of the gilthead sea bream.
Background
Fish species constitute an exceedingly diverse group, representing roughly half of the extant vertebrate species. More than 95% of all living fish species are ray-finned fishes (actinopterygians), of which more than 99.8% are teleosts. Their high level of morphological, behavioral, and ecological diversity makes the study of teleosts of real importance in attempts to address and resolve evolutionary questions. Furthermore, teleost studies are of great intrinsic interest since teleosts are economically important in both fisheries and aquaculture. In recent years, owing to the efforts made in genome studies of many fish species, especially of model species like zebrafish and Tetraodon, genomic information on vertebrates has increased substantially, and comparative genomics has become a very important method for studying genome evolution in teleosts and vertebrates in general [1], as well as for the identification of regions of conserved synteny (e.g. for review [2]).
The opportunity of comparing genomes of model fish species with those of farmed fish species can facilitate functional studies, such as the detection of candidate genes and regions for the identification of qualitative and quantitative trait loci (QTLs). Furthermore, comparative genomics can improve on the time-consuming work of identifying genes affecting trait variability through QTL mapping by offering shortcuts and hypothesis-based rather than random-scan approaches. Nevertheless, this promising approach has until now been hampered by the limited number of genome projects, owing to the expensive technology involved. A powerful method that allows comparative genome analysis to be conducted by simple means is comparative mapping, which enables comparisons of synteny and gene order [3-7]. Whereas for model fish species such as zebrafish, Tetraodon, fugu and medaka comparative mapping is common practice, only a few studies have so far been published for non-model fish species of commercial, evolutionary and ecological interest, e.g. [8].
In contrast to studies of agricultural animals, maps of DNA markers and genes allowing QTL analysis are relatively rare for cultured fish species. However, linkage maps are available for aquaculture fish species including salmonids [9,10], tilapia [11], channel catfish [12,13], Japanese flounder [14] and the common carp [15]. Among Mediterranean species, linkage maps have recently been published for Sparus aurata [16] and for another important marine aquaculture species, Dicentrarchus labrax [17]. In addition to the genetic linkage map of the gilthead sea bream, a first-generation RH map has also been constructed [18]. Radiation hybrid mapping yields dense and reliable genome maps for comparative use since, unlike linkage mapping, it is not dependent on polymorphism and permits easy mapping of genes as well as of neutral polymorphic markers.
In the present study, comparative mapping is undertaken with the gilthead sea bream (Sparus aurata), a key species for large-scale Mediterranean aquaculture. The gilthead sea bream, a non-model fish species of commercial and evolutionary interest, is distributed in the Atlantic Ocean and the Mediterranean Sea [19,20] and represents the worldwide-distributed, species-rich family Sparidae within the Perciformes. Comparative mapping for the gilthead sea bream Sparus aurata is reported through a gene-based radiation hybrid map with 428 markers, including candidate genes for QTL, and 74 microsatellite markers integrated with the previously published map of [18].
Furthermore, the considerable potential of comparative mapping for transferring information from model to non-model species is demonstrated by the exploitation of conserved synteny. The established syntenic relationship between sea bream and Tetraodon makes it possible to virtually map onto the RH map those gilthead sea bream ESTs that give a significant similarity hit to Tetraodon. The sea bream RH map facilitates the scanning for QTLs mainly controlling growth, disease resistance, sex determination and reversal, reproduction and environmental tolerance, all traits of great importance for aquaculture. It also contributes to the identification of regions of conserved synteny, thereby providing a resource for further comparative mapping analysis between fish species, and pinpoints possible chromosome splits, fusions and rearrangements during evolution.
RH mapping
An additional 74 microsatellite markers and 428 ESTs were successfully positioned and integrated into the RH map produced by [18] (Figure 1, see Additional file 1). In total, 25 RH groups were built from the newly mapped markers and from those mapped previously [18], resulting in a total of 937 molecular markers on the Sparus aurata RH map. RH groups were renumbered compared to [18], where 28 RH groups had been constructed. Since the number of chromosomes in this species is 24 [21,22], at least two of the current radiation hybrid groups must correspond to one chromosome. We anticipate that in future maps the smallest RH groups, 19 and 20, will be merged into one group, as they correspond to the same genetic linkage group, and comparative mapping indicates that they also match the same chromosome in Tetraodon (see below). In this case, the resulting number of RH groups (24) would correspond to the number of chromosomes in sea bream.
Quality control
The reliability of the dataset was verified by re-mapping a set of 11 genes and microsatellite markers already placed on the first-generation RH map [18] (sequences from NCBI), this time with newly designed primers based on ESTs from cDNA libraries produced within the BRIDGEMAP project (see Additional file 2). Comparison with the genetic linkage map confirms the reliability of the dataset. Twenty-six of the newly mapped microsatellite markers had previously been positioned on the genetic linkage map constructed by [16] and were used to integrate the RH map and the genetic map. Comparison of the RH map and the linkage map shows that most markers found in one linkage group are also found in a single RH group, with the exception of eight markers from four linkage groups that were placed in a different RH group (Figure 2). For these, new primer pairs were designed to confirm their position on the RH map. Furthermore, a set of markers (including markers already mapped by [18]) was genotyped twice, yielding the same retention vectors.
Locus matching
The loci successfully mapped on the sea bream RH map were used to search for homology against the genomes of two model species, Tetraodon nigroviridis and Danio rerio. The searches were performed by running BLAT [39] with a threshold score higher than 77, as well as BLAST with a threshold E-value < 10^-4 and a minimum alignment length of more than 50 bp, both against the ENSEMBL database (v.38, Apr 2006) of these species (see Additional files 3 and 4). Searches with BLAST and BLAT generally gave similar results. The BLAST search against Tetraodon resulted in 5% more positive hits than BLAT, while against Danio there were 19% more positive hits (Table 1). In general, BLAST searches resulted in a higher number of positive matches in all species compared to BLAT, a result inherent in the algorithms employed, which should be taken into account when using them for homology searches between species.
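Applying such thresholds to BLAST tabular output can be sketched in a few lines; the rows, query IDs and subject IDs below are hypothetical, and the column layout assumed is the standard 12-column BLAST tabular format:

```python
# Sketch: filter BLAST tabular hits with the thresholds used in the study:
# E-value < 1e-4 and alignment length of more than 50 bp.

def filter_hits(rows, max_evalue=1e-4, min_aln_len=51):
    """Keep hits whose E-value is below max_evalue and whose
    alignment length is at least min_aln_len base pairs."""
    kept = []
    for row in rows:
        fields = row.split("\t")
        aln_len = int(fields[3])    # column 4: alignment length
        evalue = float(fields[10])  # column 11: E-value
        if evalue < max_evalue and aln_len >= min_aln_len:
            kept.append(fields[0:2])  # keep (query id, subject id)
    return kept

# Hypothetical rows: query, subject, %id, aln_len, mism, gaps,
#                    q.start, q.end, s.start, s.end, evalue, bitscore
rows = [
    "Sa_EST001\tTni_chr5\t91.2\t120\t8\t1\t1\t120\t500\t620\t2e-30\t180",
    "Sa_EST002\tTni_chr1\t88.0\t40\t5\t0\t1\t40\t10\t50\t1e-5\t60",   # too short
    "Sa_EST003\tTni_chr12\t85.0\t80\t12\t2\t1\t80\t5\t85\t0.01\t45",  # weak E-value
]
print(filter_hits(rows))  # → [['Sa_EST001', 'Tni_chr5']]
```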
Comparative mapping
Comparative mapping with all available marker sequences was performed using the BLAT web server against the Tetraodon genome and the Danio rerio genome, for which ordered maps are available. Comparative mapping in Tetraodon resulted in the successful assignment of 301 Sparus aurata sequences to sequences of the Tetraodon genome. Of those, 62 were assigned to unordered random sequences (Un_random). The remaining 239 sequences gave synteny groups covering all sea bream RH groups, with a mismatch rate of 8% (20 markers not found in synteny groups) (Figure 3).
Comparative mapping of Sparus against Danio with the BLAT web server gave only 90 hits, of which 5 were not assigned to a chromosome (NA_random). Syntenic relationships between Sparus aurata and Danio were not as apparent as with Tetraodon.
Discussion
The gilthead sea bream, unlike the model organisms zebrafish and medaka that are mostly used to study diseases and malfunctions, is a species of great commercial interest. Consequently, considerable information has been gathered on different aspects of its husbandry, physiology, biology and pathology, and a comprehensive genomic "tool box" has been created. The basis for sea bream genomics was recently established with the creation of a first-generation linkage map [16] and radiation hybrid map [18]. The power of the RH map is significantly increased in the present study with the mapping of ESTs, and this will be an important resource for future QTL detection and identification of functional units. Moreover, the present RH map represents a significant tool for comparative mapping, as the sea bream belongs to the successful order Perciformes, which underwent an explosive radiation 50-70 million years ago.
Comparison of the radiation hybrid map to the linkage map
In contrast to genetic linkage maps, radiation hybrid mapping allows the mapping of non-polymorphic molecular markers such as ESTs or genes. Markers are assigned based on their retention in specific members of the panel of cell lines. The current RH map gives a higher resolution of insufficiently resolved areas of the genetic map and allows recombination hot spots to be predicted (Figure 4). Twenty-six of the 74 newly mapped microsatellite markers were also positioned on the genetic linkage map by [16] and can be used to anchor the genetic and the radiation hybrid map to each other. The discrepancy of eight markers (Bd 61, Dld 24, Bmap 54-PT, SaGT1, Ad 75, Hd23, G4 and Dld 09) between the two maps is expected, since some linkage groups will be modified with the addition of new markers. Linkage group 22 contains two of these markers, Bd 61 and Dld 24, which map to RH group 2 in this study. As already mentioned in [16], it is likely that linkage group 8 and linkage group 22, both corresponding to RH group 2, will merge into a single group. This is also the case for the markers Hd23, G4 (myogenic factor) and Dld 09, mapping to linkage group 26, which merges with linkage group 18 into one group (RH12) [16]. The marker Ad 75, positioned on linkage group 9 and RH 4, is likely to belong to linkage group 23, as [16] could not position this marker relative to the other markers on linkage group 9. Ad 75 was reported by [18] as an independent group (RH 25 in [18]) together with AY173035, AJ418609 and Cld 31; all four were grouped to RH 4 (RH24 and RH25 of [18]) in this study. Linkage group 14 most likely breaks between Bmap 19-PT and Eid 11, as the distance between these two markers is large.
Figure 1. Radiation hybrid map of Sparus aurata, consisting of 25 radiation hybrid groups and 937 molecular markers.
Probably the first half of linkage group 14, including the two markers SaGT1 and Bmap 54-PT, actually merges with linkage group 21 (which contains only four markers not positioned in a specific order), corresponding to RH group 18.
Comparative mapping
Previous comparative studies have drawn on the sea bream genetic linkage map [16] and the first-generation sea bream radiation hybrid map [18] (Figure 5). In parallel with BLAT, BLAST searches were also performed against the same databases. Though these gave slightly more hits, they were less successful in the detection of synteny groups (data not shown), which may be attributed to the fact that, among distantly related species, BLAST can detect more divergent or shorter alignments of uncertain homology. Reciprocal BLAST searching, frequently used to establish orthology, is currently not a valid option for sea bream due to the relatively small number of ESTs available. We therefore believe that the more stringent BLAT algorithm is the preferred method for comparative mapping in this study.

Figure 2. Matches between the Sparus aurata linkage map and radiation hybrid groups (RH groups are renumbered compared to [16] and [18]), shown in Oxford grid format and sorted by best matches. The number in each square is the number of matching genes. RH: radiation hybrid groups, un.: unassigned markers. Comparison of Sparus aurata radiation hybrid group 16 with genetic linkage group 1, and radiation hybrid group 15 with genetic linkage group 4, according to data from [16].

Figure 3. Oxford grid showing conservation of synteny between Sparus aurata and Tetraodon nigroviridis, sorted by best matches between Sparus radiation hybrid groups and Tetraodon chromosomes. The number in each square is the number of matching genes. Sp.: Sparus, un.: unordered random sequences.

For the following analysis we focused on Tetraodon, because it gave more BLAT hits than Danio due to its closer kinship, while also providing an ordered map [1]. In general, there seems to be an indication of a one-to-one relationship between Sparus and Tetraodon chromosomes. Given that Tetraodon has 21 chromosomes, such a one-to-one accordance is obviously not to be expected for all chromosomes. Our data suggest that four Tetraodon chromosomes each correspond to major portions of at least two Sparus radiation hybrid groups, namely Tetraodon Chr1 to Sparus RH2 and RH22, Chr2 to RH10 and RH11, Chr3 to RH24 and RH25, and Chr21 to RH19 and RH20 (the latter two RH groups may actually represent one Sparus chromosome, as noted above). The consecutive numbering of the RH groups in three of these cases is coincidental, as RH group numbers are assigned randomly by the RH software. Interestingly, [23] proposed that Tetraodon chromosomes 1 and 2 (the two largest chromosomes) each correspond to two chromosomes of Danio rerio. However, the authors also proposed a correspondence of Tetraodon chromosomes 7, 11, 12 and 13 with pairs of Danio rerio chromosomes, each of which according to our analysis corresponds to a single Sparus RH group. This may indicate that the duplication and/or rearrangement events affecting the four latter chromosomes occurred in the lineage leading to Danio, after its split from the lineage leading to Sparus and Tetraodon.
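An Oxford grid such as those in Figures 2 and 3 is essentially a two-way contingency table of marker counts. A minimal sketch, with hypothetical marker assignments rather than the study's data, is:

```python
# Sketch of an Oxford grid: count, for each (RH group, chromosome) pair,
# the markers assigned to both, then report the best-matching chromosome
# per RH group.
from collections import Counter

def oxford_grid(assignments):
    """assignments: iterable of (rh_group, chromosome) pairs, one per marker."""
    return Counter(assignments)

def best_matches(grid):
    """For each RH group, the chromosome sharing the most markers."""
    best = {}
    for (rh, chrom), n in grid.items():
        if rh not in best or n > best[rh][1]:
            best[rh] = (chrom, n)
    return best

# Hypothetical marker assignments (sea bream RH group, Tetraodon chromosome)
markers = [("RH2", "Chr1"), ("RH2", "Chr1"), ("RH2", "Chr7"),
           ("RH22", "Chr1"), ("RH22", "Chr1"), ("RH16", "Chr12")]
grid = oxford_grid(markers)
print(best_matches(grid))
# → {'RH2': ('Chr1', 2), 'RH22': ('Chr1', 2), 'RH16': ('Chr12', 1)}
```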
Mapping more EST sequences on the RH map confirmed the well-conserved synteny between gilthead sea bream Sparus aurata and the pufferfish Tetraodon nigroviridis.
Recently, a large number of new EST sequences were obtained from several different cDNA libraries by the Marine Genomics Europe project, and more sequences are expected from other ongoing European projects such as AQUAFIRST and WEALTH. In silico mapping of those sequences to the genome of Tetraodon (Table 2, Figure 6) can provide a first approximation of where those transcripts are located in sea bream, based on the high conservation of synteny between the Tetraodon and sea bream genomes. This makes the mapping of candidate genes more straightforward and also facilitates the search for conserved functional genome regions.
In order to retrieve information by comparative mapping, two approaches were pursued, described in more detail below. The first approach used the molecular markers mapped in sea bream to localize potential candidate genes in the Tetraodon genome. In the second approach, candidate genes or ESTs available in sea bream were mapped on the Tetraodon genome (Table 2) to facilitate primer design in specific candidate regions for growth, disease resistance or sex determination, and for use in further studies aimed at higher-resolution mapping of these radiation hybrid groups.
The standard approach to finding a gene in classical genetics is to specify a gene product and then try to identify the gene. In molecular genetics the reverse approach is applied: genes are identified purely on the basis of their position in the genome, through so-called reverse genetics or positional cloning. In the present study, in silico RH mapping is demonstrated to identify candidate genes, first by localizing specific functional groups of interest on Tetraodon chromosomes, then by identifying the corresponding RH groups in sea bream, and finally by corroborating the findings through in vitro RH mapping. Three examples, namely DMRT1, gonadal P450 aromatase and cytochrome P450 aromatase, are described below; for these, in silico positioning was performed first and then confirmed by RH mapping with primers designed within the exons of those genes. DMRT1 belongs to the highly conserved group of genes containing the DM domain, which may be involved in sex determination [24]. In Teleostei, although at least six genes containing the DM domain are found, their function is still unknown [25].
Looking at those genes, we found that they are localized on chromosomes 12 and 1 of Tetraodon and chromosome 5 in zebrafish; both Tetraodon chromosome 12 and Danio chromosome 5 correspond to RH group 16, suggesting that this RH group could be of interest for the mapping of QTLs related to sex determination.
The second and third examples for in silico mapping lie in the sex-determining region of Tilapia, which was mapped to linkage group 1 in Tilapia [26,27]. Linkage group 1 of Tilapia corresponds to Tetraodon chromosome 5 and Sparus RH group 18 (Figure 6). The gene order between Sparus RH group 18 and Tetraodon chromosome 5 is particularly well conserved compared to the other RH groups and their corresponding Tetraodon chromosomes, suggesting another specific region for QTL mapping. In this particularly well conserved region of Tetraodon chromosome 5 we found the gene for gonadal P450 aromatase, known to be involved in sex differentiation [28,29], as well as cytochrome P450 aromatase, which catalyzes the key step in estrogen biosynthesis [30,31] and is a neural marker of estrogen effect in teleosts.
The in vitro mapping of DM domain genes (DMRT1 and 2), gonadal P450 aromatase and cytochrome P450 aromatase against Tetraodon assigned the DM domain genes to Tetraodon chromosome 12 and the two P450 aromatase genes to Tetraodon chromosome 5. Chromosomes 12 and 5 are the homologues of RH group 16 and RH group 18, respectively. In silico mapping corroborated these findings, allocating the DM domain genes to RH group 16 and the two P450 aromatase genes to RH group 18. In this way, the correspondence between Sparus aurata and Tetraodon can facilitate the identification of genes corresponding to QTLs.
Finally, by mapping gene-based markers, potential functional units were identified in radiation hybrid groups 16 and 24: on RH16, the Sparus aurata prolactin receptor [32], the growth hormone receptor [33] and the homologue of osteoclast-stimulating factor; on RH24, the Sparus aurata growth hormone gene [34], prolactin (PRL) [35] and the osteocalcin gene [36], all of which are candidate genes for growth-related QTLs of potential economic interest.
Conclusion
By establishing syntenic relationships between Tetraodon nigroviridis and Sparus aurata through RH mapping of genes, combined with all the molecular information available today, the identification of candidate genes for QTLs in sea bream is more straightforward than it has ever been. More information is expected to come from medaka (Oryzias latipes), for which full sequence information will soon be available, as it appears to be more closely related to sea bream than Tetraodon nigroviridis (Figure 7). Furthermore, conserved synteny provides an opportunity for electronic mapping of ESTs to the sea bream RH map by first mapping them to the Tetraodon genome. This shortcut will accelerate studies of genome evolution and will give first insights into the genetic make-up of the gilthead sea bream, a species not only of great economic importance but also of considerable evolutionary interest.
RH panel
The RH panel used in the present study has been previously described [18]. Amplification of the RH panel was performed four times in parallel using the GenomiPhi Kit (Amersham-Biosciences). Prior to pooling the four amplification reactions, each panel was tested with two primer pairs to verify the absence of contamination.
Development of markers
Oligonucleotide primers were designed, using the Primer 3 software [45], from sea bream cDNA sequences generated from five cDNA libraries: a mixed embryonic and early larvae library, a liver library [37], kidney [32], pituitary [35], and 20-135 days post-hatch larvae [38].
Construction of the radiation hybrid map
Bands were scored manually as present (1), absent (0) or unclear (2). In total, 960 molecular markers were genotyped. We rejected markers that gave no PCR product or for which the sea bream and hamster bands could not be clearly distinguished. The radiation hybrid analysis was performed for 1,171 molecular markers in total, including the previously published vectors of [18], using the TSP approach implemented in the rh_tsp_map2 software package in conjunction with the CONCORDE package [41]. Radiation hybrid groups were generated by calculating the pairwise LODs with retention set to the arithmetic mean of pair and all, with an initial LOD score of 3 that was then raised to 6. The resulting data were subsequently analysed by single-linkage clustering to obtain radiation hybrid groups [41].
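The single-linkage grouping step can be sketched as follows, assuming the pairwise LODs have already been computed (by rh_tsp_map2/CONCORDE in the study); the marker names and LOD values below are purely illustrative:

```python
# Sketch: markers whose pairwise two-point LOD meets a threshold are merged
# into radiation hybrid groups by single-linkage clustering (via union-find).

def single_linkage_groups(markers, pair_lods, lod_threshold=6.0):
    parent = {m: m for m in markers}

    def find(m):
        while parent[m] != m:
            parent[m] = parent[parent[m]]  # path compression
            m = parent[m]
        return m

    for (a, b), lod in pair_lods.items():
        if lod >= lod_threshold:
            parent[find(a)] = find(b)  # union the two clusters

    groups = {}
    for m in markers:
        groups.setdefault(find(m), set()).add(m)
    return sorted(groups.values(), key=len, reverse=True)

# Hypothetical markers and pairwise LODs
markers = ["Bd61", "Dld24", "SaGT1", "Ad75", "Hd23"]
pair_lods = {("Bd61", "Dld24"): 9.1, ("Dld24", "SaGT1"): 7.4,
             ("SaGT1", "Ad75"): 2.3, ("Ad75", "Hd23"): 8.0}
print(single_linkage_groups(markers, pair_lods))
# two groups: {Bd61, Dld24, SaGT1} and {Ad75, Hd23}
```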
Comparative genomics
BLAT searching was performed using -q=dnax and -t=dnax with a score above 80 and an alignment length of more than 50 bp, as recommended for mapping ESTs to a genome across species [42]. Sequences submitted to BLAT searching came from the 937 radiation-hybrid-mapped ESTs and microsatellites produced within the European project BRIDGEMAP (present study and [18]), in addition to 31,705 EST sequences generated by the Marine Genomics Europe network and sequences of selected genes, such as genes with a putative role in sex determination, downloaded from the NCBI database. BLAST searches were performed using a significance threshold of an alignment length > 50 bp and an E-value < 10^-4 (Additional file 3).
Figure 7. Phylogenetic tree based on a combined dataset of 22 genes, modified after [43]. Maximum parsimony (MP) analyses of the combined amino acid alignment were performed with MEGA version 2.1 [44].
"year": 2007,
"sha1": "a9334f3a26ae81c4d04cb0a5070e993c7cb2872d",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/1471-2164-8-44",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b0f663e9bb99102394dda5c3fcb2a106d570f3cd",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
The anti-infective outcomes of the distal femoral replacement coated with antibiotic cement in limb salvage surgery
Abstract Background: The aim of this study was to observe the anti-infective effect of a distal femoral tumor prosthesis coated with antibiotic cement during limb salvage treatment, and to evaluate its potential clinical prospects. Methods: In this randomized controlled trial, en bloc resection and reconstruction were performed in 34 patients with primary bone tumors of the distal femur. Patients were divided randomly into 2 groups according to the application of an antibiotic bone cement coating: an antibiotic cement coating group (16 cases) and a control group (18 cases). There were 10 men and 6 women in the anti-infection group, aged from 18 to 54 years (23.47 ± 3.53), and 12 men and 6 women in the control group, aged from 19 to 56 years (24.16 ± 4.32). The 2 groups were well matched for tumor type, age, sex, and Enneking stage, and both received the same routine standard care. The antibiotic cement was coated on the surface of a punched polyethylene jacket during the operation. Periprosthetic infection, local recurrence and distant metastasis were followed up, and limb function was evaluated with the Musculoskeletal Tumor Society 93 (MSTS93) scoring system. Results: The mean follow-up was 34.7 months (range 18-62 months). There was no periprosthetic infection in the anti-infection group, whereas 4 cases in the control group developed deep infection; the difference in infection rate between the 2 groups was significant (P < .05). Infection-related prosthesis failure was 0% (0/16) in the anti-infection group and 16.67% (3/18) in the control group. Local recurrence or distant metastasis occurred in 7 of the 34 patients with primary malignant bone tumors: 2 cases of local recurrence and 1 case of distant metastasis in the anti-infective group, and 2 cases of local recurrence and 2 cases of distant metastasis in the control group.
At the latest follow-up, MSTS93 functional scoring revealed a mean of 25.6 ± 4.2 in the anti-infection group and 18.5 ± 3.3 in the control group. The prosthesis survival rate was 75% in the anti-infective group and 61.11% in the control group. Conclusion: Coating the surface of the polyethylene jacket of a custom-made distal femoral prosthesis with antibiotic cement is a simple and effective technique for controlling periprosthetic infection after tumor prosthesis reconstruction.
Introduction
With improvements in the comprehensive treatment of bone tumors and in prosthetic techniques, limb salvage has become the mainstream treatment for malignant bone tumors. Complete local resection of the tumor followed by prosthetic replacement can effectively preserve limb function and greatly improve postoperative quality of life. [1,2] However, tumor prosthetic replacement often leads to various complications, such as soft-tissue failure, aseptic loosening, structural failure, infection and tumor progression. Apart from tumor progression, deep infection is the most serious of these complications, resulting in multistep operations for recovery and sometimes in failure of limb salvage. [3,4] Surgical site infection (SSI) or periprosthetic joint infection (PJI) has a significantly higher incidence after bone tumor prosthetic replacement than after nontumorous prosthetic replacement. Postoperative infection usually requires irrigation and debridement, 2-stage revision, or amputation, which severely increases medical costs and worsens patients' quality of life. In China, a survey of the revision burden due to PJI after total hip or knee arthroplasty showed that 429 (1.77%) of 23,443 knee arthroplasty patients underwent revision, of which the PJI revision burden was 205 (0.85%), PJI being the most common cause of knee revision. [5] In another study, using the 2013 Nationwide Readmissions Database (NRD), health care resource utilization was compared between propensity-score-matched patient groups with and without SSI-related readmissions within the 90-day episode of care following total joint replacement. The results showed that SSIs were associated with significantly longer hospital length of stay and increased costs following hip and knee joint replacement; for knee arthroplasty, SSI accounted for 4.9 to 5.2 extra hospital days and $12,689 to $12,890 in extra costs. [6] Other investigators followed SSI cases after knee arthroplasty for 2 years and found that the cost of SSI treatment was 8 times that of uninfected controls. [7] Therefore, it is very important to effectively reduce SSI and PJI.
Some researchers have designed silver-coated bone tumor prostheses to overcome the high infection rate of massive bone tumor prostheses. [8] However, the preparation of such a prosthesis is complex and expensive, and its antibacterial mechanism is unclear, which limits its application. Referring to the successful use of gentamicin bead chains and antibiotic cement packing in revision hip arthroplasty, [3,4] we developed a custom-made bone tumor prosthesis coated with antibiotic cement for distal femoral tumors and compared it with the traditional custom-made prosthesis to investigate its effect on infection control.
General information
A total of 34 patients who underwent en bloc resection and reconstruction with a custom-made distal femoral prosthesis for malignant or invasive bone tumors of the distal femur between June 2010 and June 2014 were selected. This study was approved by the ethics committee of the Fourth Military Medical University. There were 22 men and 12 women, aged from 18 to 56 years (23.59 ± 3.96); 19 tumors were on the left side and 15 on the right. The bone tumor types included 19 cases of osteosarcoma, 9 cases of giant cell tumor of bone, 3 cases of chondrosarcoma, and 3 cases of Ewing sarcoma. According to the application of the antibiotic cement coating, patients were randomized into an anti-infective group (16 cases) and a control group (18 cases). The antibiotic cement (containing gentamicin sulfate) was coated on the surface of a punched polyethylene jacket during the operation. The comparison of patient information between the 2 groups is shown in Table 1. Postoperative chemotherapy was given to 81.25% (13/16) of patients in the anti-infective group and 77.78% (14/18) in the control group. Periprosthetic infection, local recurrence and distant metastasis were followed up, and limb function was evaluated using MSTS93 functional scoring.
Surgical procedure
Preoperative preparation: A preoperative biopsy was performed to confirm the tumor features. Patients with primary malignant bone tumors (except chondrosarcoma) first received 3 cycles of neoadjuvant chemotherapy, and its efficacy was assessed.
X-ray and MRI examinations of the affected limb were then repeated to re-evaluate the tumor extent; the tumor prosthesis was designed according to the planned osteotomy range, and the relevant data were sent to Beijing Chunlizhengda Co., Ltd. for prosthesis manufacture. The prosthesis was a custom-made axial bone tumor prosthesis with a 2-mm polyethylene jacket. Holes were drilled uniformly in the polyethylene jacket, with a diameter of 2.5 mm, a depth of 2 mm, and a pitch of 1.5 cm (Fig. 1). The antibiotic bone cement was a commercial product (Rabin corporation, France). The powder component (40 g) comprised: polymethylmethacrylate 83.8% (33.52 g), benzoyl peroxide 2.8% (1.12 g), barium sulfate 9.6% (3.84 g), and gentamicin sulfate 3.8% (1.52 g). The liquid component (16.4 g) comprised: methacrylate 85.3% (13.99 g), butyl methacrylate 13.2% (2.16 g), N,N-dimethyl-p-toluidine 1.5% (0.24 g), and hydroquinone 20 ppm. The cement coated on the prosthesis surface was a low-viscosity, self-curing, radiopaque antibiotic cement.
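The stated formulation can be cross-checked arithmetically: each component mass should equal its percentage of the component total (40 g and 16.4 g). The sketch below verifies this; component names and figures are taken from the formulation above, and the small discrepancies are explained by truncation (e.g. 1.5% of 16.4 g = 0.246 g, reported as 0.24 g):

```python
# Consistency check of the cement composition: mass ≈ total × percentage.

powder = {  # 40 g total
    "polymethylmethacrylate": (83.8, 33.52),
    "benzoyl peroxide": (2.8, 1.12),
    "barium sulfate": (9.6, 3.84),
    "gentamicin sulfate": (3.8, 1.52),
}
liquid = {  # 16.4 g total
    "methacrylate": (85.3, 13.99),
    "butyl methacrylate": (13.2, 2.16),
    "N,N-dimethyl-p-toluidine": (1.5, 0.24),
}

for total, components in ((40.0, powder), (16.4, liquid)):
    for name, (pct, grams) in components.items():
        expected = total * pct / 100.0
        # allow for truncation of the reported mass
        assert abs(expected - grams) < 0.01, name
print("composition consistent")  # → composition consistent
```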
Surgical procedure: Patients received continuous caudal or general anesthesia. Incisions were made at the lower thigh and the medial or lateral knee joint according to the location of the distal femoral tumor; the biopsy channel was excised, and layer-by-layer dissection was performed to expose the distal femoral tumor while retaining a layer of normal soft tissue at the tumor margin. The planned osteotomy segment was exposed and the femur was truncated; medullary cavity tissue from the cut end was sent for frozen pathological examination, which confirmed the absence of tumor invasion. All the ligaments of the knee were divided while protecting the posterior nerves, blood vessels and other structures, and the distal femur was completely resected. After flushing with saline, the femoral medullary cavity was reamed to fit the custom-made prosthesis; tibial plateau osteotomy was performed and the tibial medullary cavity was reamed. The axial bone tumor prosthesis was assembled to check the lower-extremity force line, limb length, and knee range of motion and stability, and the prosthesis was then fixed with antibiotic cement. In the anti-infective coating group, the antibiotic bone cement was mixed evenly at a solid-to-liquid ratio of 2 g:1 mL, stirring the mixture in one direction for about 3 to 5 minutes; the mixture was then coated slowly and evenly onto the surface and into the holes of the polyethylene jacket. Within about 5 to 8 minutes, the bone cement and the polyethylene jacket bonded firmly together (Fig. 1). The amount of cement on the surface was 10 to 15 g and its thickness 2 to 3 mm. After the cement had solidified, the wound was flushed with saline and a negative-pressure drain was placed. The incision was then closed layer by layer.

Table 1. Comparison of basic information and baseline data of patients.
Postoperatively, patients received intravenous antibiotics for 3 days and began CPM-assisted extension and flexion of the knee on the 2nd day after surgery. The drainage tube was removed 3 to 5 days after surgery, after which patients walked with crutches. The MSTS93 scoring system was used for functional assessment during follow-up. This system assigns 0 to 5 points to each of 6 categories: pain, level of activity and restriction, emotional acceptance, use of orthopedic supports, walking ability, and gait. The final MSTS score is calculated as a percentage of the maximum possible score; the higher the percentage, the better the functional outcome.
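The MSTS93 calculation described above can be sketched directly; the category scores in the example are illustrative only, not data from this study:

```python
# Sketch of the MSTS93 score: six categories each rated 0-5, summed and
# expressed as a percentage of the maximum possible score (30).

CATEGORIES = ("pain", "activity", "emotional acceptance",
              "supports", "walking", "gait")

def msts93_percent(scores):
    """scores: dict mapping each of the six categories to a 0-5 rating."""
    assert set(scores) == set(CATEGORIES)
    assert all(0 <= v <= 5 for v in scores.values())
    return 100.0 * sum(scores.values()) / (5 * len(CATEGORIES))

# Illustrative patient: category scores sum to 26 of a possible 30
example = {"pain": 5, "activity": 4, "emotional acceptance": 5,
           "supports": 4, "walking": 4, "gait": 4}
print(msts93_percent(example))  # about 86.7 (26/30 of the maximum)
```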
Statistical analysis
Statistical analysis was performed using SPSS 22.0 software (SPSS Inc, Chicago, IL). Comparisons between groups were performed using the chi-square test for fourfold tables, the t test and the rank sum test, with Kaplan-Meier analysis for estimation of implant survival; P < .05 was considered statistically significant.
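The implant-survival estimate referred to here is the Kaplan-Meier product-limit estimator; a minimal sketch (with hypothetical follow-up data, not the study's actual data) is:

```python
# Minimal Kaplan-Meier estimator: survival probability is multiplied by
# (n_at_risk - failures) / n_at_risk at each distinct failure time;
# censored observations only reduce the at-risk count.

def kaplan_meier(times, events):
    """times: follow-up in months; events: 1 = implant failure, 0 = censored.
    Returns a list of (time, survival probability) steps at failure times."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        n_at_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= (at_risk - deaths) / at_risk
            curve.append((t, surv))
        at_risk -= n_at_t
        i += n_at_t
    return curve

# Hypothetical cohort of 8 implants
times = [3, 13, 16, 24, 30, 40, 40, 62]
events = [1, 1, 1, 0, 0, 1, 0, 0]
print(kaplan_meier(times, events))
```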
Results
A total of 34 patients underwent en bloc resection and reconstruction with the custom-made distal femoral prosthesis for malignant or invasive bone tumors of the distal femur. There was no significant difference in sex (P = .800), age (P = .437), tumor type (P = .420), affected limb (P = .515), or Enneking stage (P = .218) between the control and anti-infective groups. The mean follow-up of all patients was 34.7 months (range 18-62 months). X-ray results showed no cracking or shedding of the antibiotic cement during the follow-up period.
The postoperative infections and their treatment are shown in Table 2; patients 1 to 4 belong to the control group. Of the infected patients, 3 were infected with Staphylococcus aureus and 1 with Acinetobacter cloacae (patient 4). All infected patients received postoperative chemotherapy except patient 3. Four cases in the control group developed periprosthetic infection. In 1 case, infection occurred 3 months after surgery and was controlled by irrigation and drainage. In another, infection occurred 16 months after surgery, was not controlled by irrigation and drainage or 2-stage revision, and ended in amputation. In a third case, periprosthetic infection occurred within 13 months and was not controlled by irrigation and drainage, and the patient then underwent 2-stage revision.

At the last follow-up, MSTS93 functional scoring (Table 3) showed an average score of 25.6 ± 4.2 in the anti-infective group, with 9 excellent, 4 good, 2 moderate and 1 poor result, giving an excellent-or-good rate of 81.25% (13/16). The average score in the control group was 18.5 ± 3.3, with 9 excellent, 5 good, 2 moderate and 2 poor results, giving an excellent-or-good rate of 77.78% (14/18). The difference in average MSTS93 scores between the control and anti-infective groups was significant (P < .001).
During the follow-up period, there were 2 cases of local tumor recurrence in the anti-infective group, both of which underwent amputation, and 1 case of distant metastasis. In the control group there were 2 cases of local recurrence, both treated by amputation, and 2 cases of distant metastasis. There was no significant difference in the incidence of local recurrence and distant metastasis between the two groups (chi-square = 0.062, P = .803). Two patients in the anti-infective group and 2 in the control group underwent 2-stage revision for other reasons. The prosthesis survival of the 2 groups is shown in Figure 2; the survival rate was 75% (12/16) in the anti-infective group and 61.11% (11/18) in the control group.
Discussion
At present, periprosthetic infection after tumor prosthesis replacement has become one of the most significant and troublesome complications for surgeons and patients. This study presents a composite prosthesis, consisting of an antibiotic cement coating on the tumor prosthesis, that could effectively reduce surgical site infection after distal femoral replacement and improve the patient's limb function.
Reasons for the high incidence of infection after tumor prosthesis reconstruction include a wide range of soft tissue resection, long surgical duration, and immune suppression caused by chemoradiotherapy. [9] In addition, soft tissue defects after tumor resection are likely to create dead space, which leads to hematocele and dropsy; the repulsion of the prosthesis and the lacuna between it and the peripheral soft tissues are also likely to cause postoperative infection. At present, the postoperative infection rate after nontumorous prosthetic replacement has been reduced to 0.7% thanks to improved prosthesis preparation techniques, standardized surgical operation, and rational use of drugs, among others, [10] whereas the infection rate after tumorous prosthetic replacement is still as high as 12.5% to 30%. [11] Postoperative infection after prosthesis reconstruction often inflicts great pain, economic burden, and severe limitation of articular function on patients, and even leads to amputation when the infection is uncontrolled. [12] To overcome postoperative infection after prosthesis replacement for bone tumors, many scholars have committed to developing antibiotic coatings for the prosthesis surface to control periprosthetic infection. However, these methods have deficiencies such as fast drug release, tissue toxicity, and effects on the mechanical bonding strength of prosthesis and bone. [13,14] In recent years, some researchers have proposed biodegradable antibiotic sustained-release systems [15] ; however, their application is still limited by deficiencies in material composites, drug-controlled release, and other techniques. Arne et al developed a silver-coated bone tumor prosthesis [8] ; however, it has not been widely used owing to complex preparation, high cost, unclear antibiotic mechanisms, and the risk of toxic side-effects.
Other researchers reported constructing an antibiotic controlled-release microsphere system on the surface of a low-elastic-modulus β titanium alloy implant, so as to develop new techniques for preparing antibiotic coatings on metallic surfaces, [16] but this has not been applied in the clinic. Therefore, it is necessary to investigate the design and preparation of a novel anti-infective bone tumor prosthesis.
In 1970, Buchholz and Engelbrecht first proposed preventing postoperative infection after joint replacement using antibiotic cement, reporting that postoperative infection was reduced from 6% to 1.6% after applying antibiotic cement in total hip replacement. [17] Antibiotic cement was gradually adopted in the clinic after it was confirmed to reduce infection after joint replacement. Yoo et al [18] applied anti-infective cement rods in knee arthroplasty and achieved good results. Wahlig [19] proved that after placing a gentamicin cement bead chain in animal osteomyelitis lesions, the local concentration was significantly higher than that achieved by intravenous administration and than the concentration required for therapy. About 11% of the gentamicin was released in the first 24 hours, and an effective bactericidal concentration was retained for 15 weeks. The concentrations in serum and urine were 0.3 mg/mL and 0.1 mg/mL, respectively, and renal cell culture showed no adverse effects on renal cells and no toxicity. The local antibiotic concentration was elevated after applying antibiotic cement, which not only improved the local anti-infective effect after prosthesis replacement but also prevented the toxic side-effects of systemic administration.
These mature and effective local antibiotic methods raised the question of whether the gentamicin bead chain or antibiotic cement techniques could be applied to prevent periprosthetic infection after prosthetic reconstruction for bone tumors. In this study, holes were punched in the polyethylene jacket on the surface of the custom-made axial bone tumor prosthesis, and gentamicin cement with an antibiotic-release effect was uniformly coated on the holes and surface of the polyethylene jacket. A local anti-infective effect was achieved through the release of antibiotics, avoiding the toxic side-effects caused by systemic administration.
This design retains the supporting strength of the prosthesis metal structure, and the holes on the surface of the polyethylene jacket prevent shedding of the bone cement, ensuring the safety and effectiveness of the bone tumor prosthesis. The technique is simple, without prolonging surgical time or increasing economic burden. Our clinical results showed that, during follow-up, none of the 16 patients receiving the anti-infective prosthesis developed infection or shedding of bone cement from the polyethylene jacket surface. Among the patients not receiving the anti-infective prosthesis, the postoperative infection rate was as high as 22.22% (4/18); of these, 2 patients underwent two-stage revision and 1 patient underwent amputation. The difference in postoperative periprosthetic infection between the 2 groups was statistically significant. Therefore, the anti-infective cement coating can effectively prevent periprosthetic infection after bone tumor prosthesis reconstruction, and it is a simple and convenient method.
Of the 34 patients with primary malignant bone tumors, the incidence of postoperative local recurrence and distant metastasis was 20.58% (7/34): 2 cases of local tumor recurrence and 1 case of distant lung metastasis in the anti-infective coating group, and 2 cases of local tumor recurrence and 2 cases of distant metastasis in the group without the anti-infective coating. The incidence of local recurrence and distant metastasis did not differ significantly between the two groups (P = .803), indicating that the anti-infective coating had no significant effect on tumor control. Postoperative MSTS93 scores showed that the excellent-or-good rate in the anti-infective coating group was significantly improved compared with the group without the coating, with average scores of 25.6 ± 4.2 and 18.5 ± 3.3, respectively, suggesting that postoperative periprosthetic infection has a significant effect on limb function.
Antibiotics widely mixed into antibiotic cement mainly include tobramycin, gentamicin, and vancomycin. [20,21] Tobramycin and gentamicin have been widely used in antibiotic cements owing to their broad antibiotic spectrum, good thermal stability, and quick absorption. Although any microbe can cause periprosthetic infection after prosthesis replacement, the most common pathogens are plasma coagulase-negative staphylococci and Staphylococcus aureus. Berbari et al [22] cultured periprosthetic infection tissues after early- and middle-phase prosthesis replacement, and their results revealed that coagulase-negative staphylococci accounted for 30% to 43% and Staphylococcus aureus for 12% to 23% of cases. Our preference for local administration of gentamicin for periprosthetic infection was mainly based on these bacteriological test results. [22] Although the prevention of periprosthetic infection with antibiotic cement after prosthesis replacement remains controversial, the efficacy of antibiotic-containing bone cement in preventing and treating infection after prosthesis replacement has been supported by animal experiments and clinical data, and its mechanism has become clearer. However, its effectiveness under specific conditions, as well as the interactions among organisms, bone cement, and antibiotics, needs further investigation, and the resistance caused by antibiotic cement has yet to be resolved.
This study still has certain limitations. First, the number of cases is only 34, which is relatively low; in future studies, we will expand the number of cases to further prove the anti-infective effect of this prosthesis. Second, we will extend the follow-up time to observe the long-term efficacy of the prosthesis.
Conclusions
The custom-made punched polyethylene jacket on the surface of the distal femoral tumor prosthesis with an antibiotic cement coating can effectively control infection after distal femoral resection and prosthesis reconstruction, enhance the prosthesis effect and limb function, and improve patients' quality of life. This method is simple and convenient and worth clinical promotion, although its long-term efficacy needs further follow-up observation.
"year": 2022,
"sha1": "c71f4506172ec37c60f63aea8828118c480b0ea7",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "d059855a1ea58d0e8196270a324e3b4e4f7a5b80",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Interplay between Position-Dependent Codon Usage Bias and Hydrogen Bonding at the 5ʹ End of ORFeomes
Redundancy of the genetic code creates a vast space of alternatives to encode a protein. Synonymous codons exert control over a variety of molecular and physiological processes of cells mainly through influencing protein biosynthesis. Recent findings have shown that synonymous codon choice affects transcription by controlling mRNA abundance, mRNA stability, transcription termination, and transcript biosynthesis cost. In this work, by analyzing thousands of Bacteria, Archaea, and Fungi genomes, we extend recent findings by showing that synonymous codon choice, corresponding to the number of hydrogen bonds in a codon, can also have an effect on the energetic requirements for unwinding double-stranded DNA in a position-dependent fashion. This report offers new perspectives on the mechanism behind the transcription-translation coordination and complements previous hypotheses on the resource allocation strategies used by Bacteria and Archaea to manage energy efficiency in gene expression.
synonymous mutations (6,7). The specific arrangement of synonymous codons in coding sequences (CDSs) has been shown to serve as a regulatory mechanism for translation dynamics (8) and protein cotranslational folding (9). In particular, the 5′-end region of CDSs has strong effects on translation, where synonymous codon choice is associated with targeting efficiency of signal peptides (10), ramping of translation efficiency (11), local folding energy (12), modulated protein expression (13), and recognition of nascent peptides by the signal recognition particle (14).
Similarly to translation, codon usage bias has been associated with transcriptional selection (15) and optimization of transcription efficiency (16). Recent reports support the idea that codon variants also define the energy and cellular resources required for transcript biosynthesis (17)(18)(19)(20) and the speed of transcript elongation (21). However, in contrast to translation, the potential links between position-dependent codon usage bias at the 5′ end of CDSs and transcription have yet to be thoroughly investigated, as it is difficult to disentangle the effects operating at the level of transcription from those operating at the level of translation, where position-dependent codon usage bias is known to have an effect (3)(4)(5).
During transcription, helicases melt the hydrogen bonds in double-stranded DNA (dsDNA) (22)(23)(24)(25) to expose the single-stranded DNA (ssDNA) template sequence, while RNA polymerase produces the RNA molecule (26). Although the role of helicase can be active or passive (27), the dsDNA unwinding process requires energy (28) and successful unwinding of the dsDNA is a determinant in preventing abortive transcription and translation initiation (29). In this work, we explore whether the previously established position-dependent arrangement of codons can also create a position-dependent energetic requirement to unwind dsDNA by controlling the number of hydrogen bonds. Our central hypothesis stems from the fact that increased GC content of a gene increases the number of hydrogen bonds in its dsDNA, thereby demanding higher unwinding energy (30).
Here, by first analyzing the ORFeome (the set of all CDSs in a genome) of Escherichia coli as a model and subsequently extending the investigation to a more comprehensive set of over 14,000 ORFeomes, we provide genomic evidence that codon usage bias creates an exponentially increasing ramp of hydrogen bonding at the 5′ end of CDSs in Bacteria and Archaea. The findings in this study are not intended to provide evidence for stronger positional selection of codons for transcription efficiency over the well-established theories of position-dependent codon selection in translation efficiency (11) and mRNA secondary structure (12). Instead, our results suggest that, as another layer of a potential biological role, position-dependent codon usage bias creates a position-dependent energetic requirement for unwinding dsDNA. This report provides novel insights into the evolution of molecular traits and the trade-offs between the genetic code and the physiology of organisms.
RESULTS
Effects of codon variants on hydrogen bonding and its positional dependency at the 5′ end of the E. coli ORFeome. We began our analysis by categorizing codons according to their hydrogen bond content (Fig. 1). The number of hydrogen bonds in a codon is directly coupled to the GC content of the codon due to the Watson-Crick base pairing of nucleotides (31). Each codon can contain six to nine hydrogen bonds, but most codons tend to have seven or eight (Fig. 1A). All degenerate amino acids have choices of codons with different numbers of hydrogen bonds (Fig. 1B), and the relative hydrogen bond content of a codon can be decreased by up to 25% according to the synonymous codon choice (Fig. 1C). The range of choices for hydrogen bonding becomes wider in accordance with position-dependent codon usage bias, where the overall and local hydrogen bond composition of a CDS can be fine-tuned by introducing synonymous mutations (Fig. 1D).
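The codon-level bookkeeping described above follows directly from Watson-Crick pairing: an A:T pair contributes two hydrogen bonds and a G:C pair three, so a codon carries between six and nine bonds. A minimal sketch of such a tally, together with the cheap/expensive split used later in the analysis (the function names are ours, not from the study's code):

```python
# Hydrogen bonds per Watson-Crick pair: A:T contributes 2, G:C contributes 3.
HBONDS = {"A": 2, "T": 2, "G": 3, "C": 3}

def codon_hbonds(codon: str) -> int:
    """Number of hydrogen bonds a codon contributes in dsDNA (6 to 9)."""
    return sum(HBONDS[base] for base in codon.upper())

def codon_cost_class(codon: str) -> str:
    """'cheap' codons carry 6-7 hydrogen bonds, 'expensive' ones 8-9."""
    return "cheap" if codon_hbonds(codon) <= 7 else "expensive"

# Synonymous choice changes bonding: Gly is encoded by GGA/GGC/GGG/GGT.
print(codon_hbonds("GGC"), codon_cost_class("GGC"))  # 9 expensive
print(codon_hbonds("GGA"), codon_cost_class("GGA"))  # 8 expensive
print(codon_hbonds("TTA"), codon_cost_class("TTA"))  # 6 cheap
```

A swap from GGC to GGT within a Gly family changes the bond count by one without touching the protein, which is the flexibility the analysis exploits.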
All CDSs in the ORFeome of E. coli K-12 substrain MG1655 were analyzed to test whether the number of hydrogen bonds follows a positional dependency at the 5′ end. The mean number of hydrogen bonds in each codon position was calculated. We observed that the number of hydrogen bonds per codon gradually increased in a position-dependent manner until about the 15th codon position. After this codon position, the number of hydrogen bonds converged to carrying-capacity levels that remained similar until the 250th codon position (Fig. 1E). Subsequently, we discretized codons into the following two groups according to their hydrogen bond content: "cheap" codons (with six or seven hydrogen bonds) and "expensive" codons (with eight or nine bonds). We observed that members of the group of cheap codons are utilized with high (∼65%) frequency and that their use then decreases gradually in a position-dependent manner until an equilibrium is reached at about the 15th codon position (Fig. 1F). From the 15th codon position to the 100th, the frequencies of utilization of cheap and expensive codons do not vary by more than ∼5%, with cheap codons appearing much less frequently than expensive codons (Fig. 1F).
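The position-wise averaging described above can be sketched as follows; the toy two-CDS "ORFeome" and the helper names are illustrative only, not the study's pipeline:

```python
from statistics import mean

HBONDS = {"A": 2, "T": 2, "G": 3, "C": 3}

def hbond_profile(cds_list, max_pos=100):
    """Mean hydrogen bonds per codon at each codon position across an ORFeome.

    cds_list: iterable of CDS strings (lengths are multiples of 3).
    Returns a list whose i-th entry is the mean for codon position i + 1.
    """
    profile = []
    for pos in range(max_pos):
        values = []
        for cds in cds_list:
            codon = cds[pos * 3:pos * 3 + 3]
            if len(codon) == 3:  # skip CDSs shorter than this position
                values.append(sum(HBONDS[b] for b in codon))
        if not values:
            break
        profile.append(mean(values))
    return profile

# Toy ORFeome: cheap codons early, expensive codons later in each CDS.
toy = ["ATGTTAAAAGGCGGG", "ATGATTACAGCCGGC"]
print(hbond_profile(toy, max_pos=5))
```

On a real ORFeome this profile is what a ramp model would be fitted against, position by position.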
Taken together, these results show that the choice of different synonymous codons can affect hydrogen bonding and that the E. coli ORFeome apparently uses this flexibility in a way that smoothly increases the energetic requirement for unwinding the dsDNA molecule in CDSs.
Lower hydrogen bonding at the first CDS of operons in E. coli. One biological interpretation of the observed position-dependent hydrogen bonding is that it may favor CDS transcription according to the modulated efficiency of dsDNA unwinding. Thus, evolution might reflect differential selective forces for hydrogen bonding optimization acting on the CDSs of operons with more than one CDS. Specifically, if hydrogen bonding has an effect on transcription, the first CDS within an operon, being closest to the beginning of the transcriptional unit, should be better optimized for lower hydrogen bonding than internal CDSs. To test this hypothesis, the number of hydrogen bonds of CDSs according to the position they occupy within an operon in E. coli was quantified (Fig. 2A). Only operons containing two or more CDSs were analyzed, and the downstream analyses focused on the first three CDS positions within an operon, as the number of operons with more than three CDSs is low (less than a third of the number of operons with two CDSs) (Fig. 2B). We observed that CDSs in the first position within an operon (i.e., CDS 1) had a significantly lower number of hydrogen bonds (Wilcoxon test, P < 0.05) than the internal CDSs (i.e., CDS 2 and CDS 3) in the majority of the codon positions along the length of a CDS (Fig. 2C).
The preference for a lower number of hydrogen bonds appeared weaker downstream of the 20th codon position, as the difference in hydrogen bonding between CDS 1 and subsequent CDSs became consistently and gradually less significant, as indicated by both the pairwise comparisons (Wilcoxon test) and group rank differences according to CDS position (Kruskal-Wallis test) (Fig. 2C). In codon positions 81 to 100, the difference in the number of hydrogen bonds between CDS positions was not significant (Kruskal-Wallis test, P > 0.05). The number of hydrogen bonds in CDS 2 was significantly lower (Wilcoxon test, P = 0.0082) than that in CDS 3, primarily in codon positions 1 to 20 (Fig. 2C). However, differences in hydrogen bonding based on CDS position were found to be preserved in comparisons of codon positions from 1 to 100 (Fig. 2D) and over the entire length of a CDS (Fig. 2E). Together, these results suggest that the proposed transcriptional efficiency hypothesis favors the beginning of the transcription unit in E. coli.
Highly transcribed CDSs require a lower maximum capacity of hydrogen bonds per codon in E. coli. An alternative approach to assessing the proposed association between position-dependent hydrogen bonding and dsDNA unwinding energy is to study whether there are differences in hydrogen bonding between CDSs with different expression levels. We hypothesized that if CDSs prefer codons with a lower number of hydrogen bonds at the 5′ end to optimize transcription, the position-dependent hydrogen bonding might be differentiable according to transcript abundances. By analyzing the transcriptome sequencing (RNA-Seq) data of E. coli generated under 16 different sets of conditions (32), as illustrated in Fig. 3A, we found that highly transcribed CDSs required lower levels of hydrogen bonding (Fig. 3B) and that the level of hydrogen bonding was generally lower in most codon positions from 1 to 100 (Fig. 3C) than for minimally transcribed CDSs. The differences in the levels of hydrogen bonding increased with the level of disparity in transcript abundances between highly and minimally expressed CDSs (Fig. 3B), suggesting that a preference for lower numbers of hydrogen bonds helps to optimize transcription (Fig. 3B). The position-dependent hydrogen bonding of randomly selected CDSs indicated that most CDSs still exhibited a ramp regardless of transcript abundance (Fig. 3B). Overall, we observed that highly transcribed CDSs in E. coli required a lower maximum capacity of hydrogen bonds per codon, suggesting that the energetic requirement to unwind the dsDNA is lower for highly transcribed CDSs than for minimally transcribed CDSs (Fig. 3).
Distinguishing position-dependent hydrogen bonding from translation-related and mRNA secondary structure-based phenomena in E. coli and Saccharomyces cerevisiae. In order to support the hypothesis of the transcriptional relevance of position-dependent hydrogen bonding and to distinguish it from the already known translation-related and mRNA secondary structure-based hypotheses, we assessed the potential relationships between position-dependent hydrogen bonding and the metrics traditionally used in codon usage bias studies for E. coli and S. cerevisiae (to gain insights into potential differences between Bacteria and Archaea and eukaryotes). The metrics computed as a function of codon position were (i) the frequency of preferred codons (determined using relative synonymous codon usage [RSCU] data), (ii) mRNA secondary structure folding (using the probability of base pairing), (iii) codon optimality (using the codon adaptation index [CAI]), (iv) translation efficiency (using the tRNA adaptation index [tAI]), and (v) hydrogen bonding.
We observed a ramp in all the codon usage metrics, mRNA folding, and hydrogen bonding as a function of codon position in E. coli (Fig. 4A). In contrast, the results obtained for S. cerevisiae showed that hydrogen bonding and mRNA secondary structure formation appeared unrelated (Fig. 4A). In order to understand the potential associations among all the computed metrics, a correlation network analysis was conducted (Fig. 4B). We found that hydrogen bonding significantly (adjusted P < 0.01) and strongly (Spearman's ρ = 0.51) correlated with the mRNA secondary structure in E. coli but not in S. cerevisiae (Spearman's ρ = 0.28) (Fig. 4C). Consistently, the ramps found in mRNA secondary structure and hydrogen bonding were found to be strongly related in the first 15 to 20 codons only in E. coli (Fig. 4D). Overall, the observed correlation suggests that selection acts to maintain tightly associated ramps in mRNA secondary structure and hydrogen bonding only in E. coli (Fig. 4).
In order to assess whether these observations could be extended to other microorganisms, we deployed the same analyses on a set of model Bacteria, Archaea, and Fungi (see Fig. S1 in the supplemental material). Although the conclusions remained largely the same for the other ORFeomes, there were some differences. For example, similarly to the results seen with E. coli, a ramp was also observed in all the codon usage metrics and hydrogen bonding as a function of codon position in the archaeon Haloferax volcanii, but this was not the case for the other model ORFeomes analyzed (Fig. S1A). Although the bacterium "Candidatus Methylacidiphilum kamchatkense" and the archaeon Methanosarcina acetivorans did not show a clear positional dependency in the frequency of preferred codons (RSCU), codon optimality (CAI), and translation efficiency (tAI), mRNA folding and hydrogen bonding showed a ramp (Fig. S1A), indicating that the hydrogen bonding phenomenon is distinguishable from the other codon usage-related phenomena in these organisms. In general, position-dependent hydrogen bonding was found to be tightly related to mRNA secondary structure formation in the model Bacteria and Archaea but not in the model eukaryotes (Fig. S1B to D).
Modeling the hydrogen bonding ramp in E. coli. After investigating the biological relevance of the ramp of hydrogen bonding as a function of transcriptional unit (Fig. 2) and gene expression (Fig. 3), as well as identifying its association with mRNA secondary structure formation as a potential genomic signal of the coupling between transcription and translation in Bacteria and Archaea (Fig. 4), we then sought to model and characterize the ramp in E. coli. We tested three mathematical functions to model the mean number of hydrogen bonds per codon as a function of codon position. According to Akaike information criterion (AIC) and Bayesian information criterion (BIC) data, the bounded exponential model with three parameters (initial content, rate, and carrying capacity) produced the best fit (Fig. 5A). The fit of the model showed that the number of hydrogen bonds per codon follows an exponential function of codon position with a positive rate that has a ramp-like shape at the 5′ end of CDSs.
Testing the selection for reduced hydrogen bonding at the 5′ end in E. coli. After determining that the ramp of hydrogen bonding can be better fitted by an exponential model, we further tested whether selection acts, through position-dependent codon usage bias, against a uniform distribution of hydrogen bonds per codon along CDSs in E. coli. To test this hypothesis, we applied codon shuffling techniques (33,34) to generate 200 simulated ORFeomes of E. coli containing random synonymous mutations. The codon-shuffled ORFeomes were used as a null model to test selection against uniformity using the χ² statistic (33,34).
The z² value (from the χ² statistic) per codon position showed that selection acted against a uniform distribution of the number of hydrogen bonds and that selection was noticeably stronger at the 5′ end of the E. coli ORFeome (Fig. 5B). Finally, we investigated the direction of selection acting on the 5′ end of the E. coli ORFeome. To assess the selection direction, we computed the value for the -gram and found that selection acted to reduce the number of hydrogen bonds at the 5′ end of CDSs in the E. coli ORFeome in a position-dependent manner (Fig. 5C).
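A null model of the kind described above can be built by permuting synonymous codons within each CDS, which preserves both the encoded protein and the codon counts while destroying any positional signal. A minimal sketch (the codon table fragment and function names are ours; the published shuffling procedures (33, 34) may differ in detail):

```python
import random

# Illustrative fragment of the standard codon table, not the full table.
CODON_TO_AA = {
    "ATG": "M", "TTA": "L", "TTG": "L", "CTA": "L", "CTG": "L",
    "GGC": "G", "GGA": "G", "GGG": "G", "GGT": "G",
    "AAA": "K", "AAG": "K", "TAA": "*",
}

def translate(seq):
    """Amino acid sequence of a CDS, codon by codon."""
    return [CODON_TO_AA[seq[i:i + 3]] for i in range(0, len(seq), 3)]

def shuffle_synonymous(cds, rng=random):
    """Permute codon positions within each synonymous family.

    The encoded protein and the codon counts are preserved; only the
    positional arrangement of synonymous codons is randomized, giving a
    null model for tests of position-dependent bias.
    """
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    by_aa = {}
    for idx, codon in enumerate(codons):
        by_aa.setdefault(CODON_TO_AA[codon], []).append(idx)
    shuffled = list(codons)
    for positions in by_aa.values():
        pool = [codons[i] for i in positions]
        rng.shuffle(pool)
        for i, codon in zip(positions, pool):
            shuffled[i] = codon
    return "".join(shuffled)

cds = "ATGTTACTGGGCGGAAAATAA"
null_cds = shuffle_synonymous(cds, random.Random(0))
assert translate(null_cds) == translate(cds)  # protein is invariant
print(null_cds)
```

Repeating the shuffle (e.g., 200 times, as in the study) yields an empirical null distribution of hydrogen bonds per codon position against which the observed profile can be compared.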
Position-dependent hydrogen bonding consistently correlates with mRNA structure folding in Bacteria and Archaea but not Fungi. As local reduction of base pairing probability in mRNA facilitates translation initiation (35), we next tested whether the observed correlations between hydrogen bonding in CDSs and formation of the mRNA secondary structure could be a genomic signal in diverse genera of Bacteria and Archaea, but not eukaryotes, as part of the molecular mechanism that optimizes the coordination between transcription and translation (36). The expectation is that for genes of organisms whose transcription and translation are coupled in space and time (i.e., Bacteria and Archaea), a significant and strong positive correlation between the position-dependent mRNA secondary structure formation and hydrogen bonding should be universally conserved. In contrast, the correlation in eukaryotes should be insignificant or weaker.
To investigate this issue, the position-dependent probabilities of pairing of mRNA and position-dependent hydrogen bonding of ∼1,700 ORFeomes in the representative data set were computed. We discretized the correlation analysis by different regions of codon position (Fig. 6A) and found that the positive and strong correlation was conserved in Bacteria and Archaea regardless of the codon position region (Fig. 6B). However, despite an increase in the Pearson's r (Fig. 6B) and Spearman's ρ (Fig. S2A) median correlation values in Fungi as the codon position region was shortened, the correlation values were <0.5 in the best-case scenario and much lower than those seen in Bacteria and Archaea. Overall, the correlation between the position-dependent probability of pairing of mRNA and position-dependent hydrogen bonding in Bacteria and Archaea is significantly stronger than that seen with eukaryotes (Fig. 6C; see also Fig. S2B). While these two metrics are expected to correlate positively with one another, the consistently strong associations observed for Bacteria and Archaea provide new insights into the evolutionary coupling of transcription and translation through the position-dependent optimization of hydrogen bonding and mRNA pairing probability. Accordingly, we further investigated whether evolution preserves position-dependent hydrogen bonding in Bacteria and Archaea. The results of the test for selection against a uniform distribution of hydrogen bonds per codon along CDSs on every ORFeome in the representative data set indicated that the strength of the selection was conserved (Fig. 6D). Finally, we studied whether the distribution of correlations between the position-dependent pairing probability of mRNA and position-dependent hydrogen bonding is associated with specific taxonomic classes and whether these classes show similar patterns of genomic GC and GC3 content and mutational bias (i.e., GC3/GC) (Fig. S2C).
As an outlier with respect to the correlation, the members of the bacterial class Mollicutes were found to contain a set of ORFeomes for which the correlation was only weakly positive (Fig. S2C). Mollicutes also showed the lowest genomic GC and GC3 content in the set of bacterial and archaeal ORFeomes analyzed (Fig. S2C). All other bacterial and archaeal classes showed equally strong correlations but variable genomic GC and GC3 content and mutational biases (Fig. S2C). Interestingly, the three fungal groups showing the highest median of the correlation distribution corresponded to the three classes with the lowest genomic GC and GC3 content and a mutational bias value of <1.0 (Fig. S2C). The fungal classes Malasseziomycetes and Tremellomycetes showed the strongest correlations between the position-dependent pairing probability of mRNA and position-dependent hydrogen bonding, but these correlations were negative, and no associations were found with GC and GC3 content or mutational bias (Fig. S2C). Overall, the results from this representative data set showed that position-dependent hydrogen bonding consistently correlates with mRNA structure folding in Bacteria and Archaea but not eukaryotes and that selection against a uniform distribution of codons within CDSs acts on these bacterial and archaeal ORFeomes to reduce the number of hydrogen bonds in the first codons of CDSs.
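The per-ORFeome correlations summarized above reduce to computing Pearson's r between two position-indexed profiles. A minimal illustration with hypothetical profile values (the numbers are invented for the example, not taken from the data set):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical position-dependent profiles for the first five codons:
hbonds = [6.9, 7.1, 7.3, 7.5, 7.6]        # mean hydrogen bonds per codon
pairing = [0.31, 0.35, 0.40, 0.44, 0.46]  # mean base-pairing probability
print(round(pearson(hbonds, pairing), 3))
```

Two co-rising ramps like these yield r close to 1; computing this per ORFeome and collecting the values by taxon reproduces the kind of distribution compared across Bacteria, Archaea, and Fungi above.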
Modeling the hydrogen bonding ramp in ORFeomes of Bacteria, Archaea, and Fungi. After we had successfully modeled the hydrogen bonding ramp in E. coli (Fig. 5A) and identified its association with mRNA secondary structure formation as a potential genomic signal of the coupling between transcription and translation in Bacteria and Archaea but not in eukaryotes (Fig. 6A to C), we further investigated whether the bounded exponential ramp model can be universally fitted to diverse ORFeomes. To explore this issue, we compiled a comprehensive data set with ∼14,500 ORFeomes that included Bacteria, Archaea, and Fungi from diverse phyla (Fig. 7A). The data set comprised ORFeomes with various numbers of CDSs (Fig. S4A), total lengths (Fig. S4B), mean CDS lengths (Fig. S4C), diverse GC3/GC ratios (Fig. S4D), and different mutational biases per phylum (Fig. S4E). We analyzed the position-dependent number of hydrogen bonds per codon of each ORFeome and found that in most Bacteria and Archaea (94% of Bacteria and 86% of Archaea), the number of hydrogen bonds per codon position could be successfully fitted by the bounded exponential model, whereas the fit of this model was unsuccessful in most Fungi (85%) (Fig. 7B). Instead, the linear model produced a better fit for most of the fungal ORFeomes (Fig. S5A), and the subset of ORFeomes successfully fitted by the bounded exponential model was not monophyletic (Fig. S5B). We further investigated differences between the groups successfully and unsuccessfully fitted by the bounded exponential model, and only two significantly different features were observed (Fig. S6). First, the total ORFeome lengths tended to differ between the two modeled groups in Bacteria and Fungi (Fig. S6A, P < 0.001); second, the mean lengths of CDSs per genome were significantly different in Bacteria (Fig. S6B, P < 0.001). No differences were found for GC3/GC ratios (Fig. S6C).
Scrutinized by phylum, only Aquificae and Nitrospirae showed major differences in genomic GC content (Fig. S6D) and mutational bias (Fig. S6E) between the two modeled groups (caused by outlier ORFeomes). The outlier ORFeomes that could not be successfully fitted by the bounded exponential model had a relatively higher GC content and a higher GC3/GC ratio.
Once we established that the bounded exponential model could be fitted to most Bacteria and Archaea, we evaluated the statistical significance of the modeling by estimating the P value of the rate parameter (a strong indicator of the ramp) in each successfully fitted model (Fig. S7A). We found that most of the rate parameter estimates for Bacteria (99.5%) and Archaea (91%) were significant (P < 0.001), while only eight were significant in the small subset of ORFeomes that were successfully modeled in Fungi (43 ORFeomes) (Fig. S7A). We further assessed whether the statistical significance of the rate parameter correlated with other molecular features (Fig. S7B). We found that the strongest correlations in Bacteria and Archaea were with the total length of the ORFeome and the number of CDSs per ORFeome (Pearson correlation coefficient, Fig. S7B). By linear regression modeling, we observed that ~30% of the variation in the statistical significance of the rate parameter can be explained by the variation in the number of CDSs in the ORFeomes of Bacteria and Archaea (R² = 0.35 with P < 0.001 in Bacteria and R² = 0.28 with P < 0.001 in Archaea, Fig. S7C).
Characteristics of the ramp of the number of hydrogen bonds in Bacteria, Archaea, and Fungi. Further characterization of the bounded exponential ramp model parameters (Fig. 7C) showed that significant differences (α = 0.1% was adopted for the analysis due to the large sample size) were not observed in the estimated parameter of carrying capacity of hydrogen bonds between Bacteria, Archaea, and Fungi (Fig. 7D, adjusted P > 0.001). On the other hand, the estimated parameters of initial number of hydrogen bonds (Fig. 7E) and rate (Fig. 7F) were significantly different between all groups (adjusted P < 0.001). We observed that the initial number of hydrogen bonds was lowest in Bacteria (Fig. 7E), which is consistent with the rate of increase in the number of hydrogen bonds per codon being the highest in Bacteria (Fig. 7F) to reach carrying capacities that were not significantly different between all groups after the ramp (Fig. 7D). Hence, by linear regression modeling between the estimated parameters for initial content and carrying capacity, one can approximate the rapidity of the change in the average number of hydrogen bonds per codon given that the carrying capacity becomes steady at about the 20th codon position (Fig. 7G).
We further assessed the phylogenetic relatedness of the ramp rate (the indicator for the existence of the ramp of hydrogen bonding) for the ORFeomes in the representative data set. A whole-genome phylogenetic tree was constructed, and the ramp rate was mapped to each branch (Fig. 7H). We observed that most of the phyla had similar median ramp rates (Fig. 7I), with Actinobacteria, Proteobacteria, Verrucomicrobia, and Bacteroidetes showing the highest ramp rates among the bacterial phyla (Fig. 7I) and the phylum Thaumarchaeota having the highest ramp rates among the archaeal phyla (with a median value greater than that seen with some of the bacterial phyla) (Fig. 7I). Interestingly, the fungal phylum Microsporidia showed positive ramp rates, and the median value was greater than that seen with some of the bacterial and archaeal phyla (Fig. 7I).
A Web-based application to analyze position-dependent hydrogen bonding. In order to facilitate the analysis of position-dependent hydrogen bonding of novel and custom ORFeomes, a Web-based graphical user interface (GUI) application was developed using the R package shiny (37). The application incorporates all the methods developed and implemented in this work. In a simple GUI (Fig. S8), the application allows interactive investigation of novel and customized ORFeomes, download of raw analysis and modeling data, and generation of high-quality figures. The application also reports summary statistics associated with modeling of hydrogen bonding per codon position by the bounded exponential model. For cases that cannot be successfully modeled, the application provides outputs that graphically represent the observed number of hydrogen bonds per codon position and a summary report of the analysis. The application is publicly available at https://juanvillada.shinyapps.io/hbonds/.
DISCUSSION
By first analyzing the ORFeome of E. coli as a model and subsequently over 14,000 bacterial, archaeal, and fungal ORFeomes, we found evidence for an exponential ramp of hydrogen bonding at the 5′ end of CDSs in Bacteria and Archaea that is created by a position-dependent codon usage bias. With the methods used in this investigation, a similar ramp in fungal ORFeomes was not identified. From a resource allocation perspective, a ramp of hydrogen bonding found in Bacteria and Archaea may provide an energy-efficient mechanism in which the energy required to melt hydrogen bonds (38-40) and unwind dsDNA is gradually increased. It has been reported previously that AU-rich codons are selected for at the beginning of CDSs in E. coli and other organisms (35) and that genomic GC content shows positional dependency in diverse organisms (41), which would in turn reduce the local hydrogen bonding at the 5′ end of CDSs. In contrast to previous studies where analyses were limited to characterizing only the first 15 to 20 codon positions (35) or a smaller set of ORFeomes (41), we analyzed a longer region of the 5′ end of CDSs (100 or 250 codon positions) and a data set with thousands of ORFeomes that included all three domains of life. Hence, we managed to identify parameters that mathematically describe the formation of the hydrogen bonding ramp and the extent of its conservation in microbial ORFeomes.
We have provided evidence indicating that the CDSs occupying the first position of operons in E. coli have lower levels of hydrogen bonding than internal CDSs and that this preference is most obvious in the first ~20 codons of the first CDS in an operon, suggesting that transcriptional efficiency might be favored at the beginning of the transcription unit (Fig. 2). By coupling hydrogen bonding and transcriptomics data of E. coli, we further showed that highly transcribed CDSs demand a lower maximum capacity of hydrogen bonds per codon, suggesting that the energetic requirement to unwind the dsDNA in highly transcribed CDSs has evolved to be minimized (Fig. 3). By contrasting position-dependent hydrogen bonding with codon usage metrics, we also showed that selection acts to maintain tightly associated ramps in mRNA secondary structure and hydrogen bonding in E. coli (Fig. 4) as well as generally in Bacteria and Archaea but not Fungi (Fig. 6). A parsimonious explanation for the existence of a ramp of hydrogen bonding in Bacteria and Archaea, but not Fungi, is that it is a molecular and evolutionary mechanism that optimizes the coupling of transcription and translation. Transcription and translation in Bacteria and Archaea are coupled in space and time (42), so the two processes influence each other. One such example can be found in the tight coordination maintained between transcription and translation in order to avoid premature termination of transcription (36). Therefore, it is reasonable to hypothesize that evolutionary traits may have developed in order to optimally couple the transcription of protein-coding genes and the translation initiation of mRNA in Bacteria and Archaea. The ramp of hydrogen bonding might be one such trait that optimizes the efficiency of the coupling between transcription and translation (i.e., cotranscriptional translation efficiency) in Bacteria and Archaea.
With a high level of cotranscriptional translation efficiency, dsDNA unwinding energy (i.e., hydrogen bonding) should be lower at the 5′ end of CDSs than at regions downstream of the start codon. Subsequently, efficient initial elongation of transcription occurs, and the nascent mRNA molecules effectively couple to the translation machinery such that translation elongation begins effectively. In turn, translation also follows a ramp of efficiency in which ribosomes are effectively recruited due to the relatively lower mRNA secondary structure, and initial elongation begins relatively slowly according to the enrichment of nonoptimal and rare codons at the 5′ end of CDSs (11,43,44).
In the proposed mechanism of cotranscriptional translation efficiency, although both transcription and translation appear to be mediated by an initial ramp, the two ramps run in opposite directions. While a ramp of translation efficiency has been shown to start with a higher occurrence of nonoptimally translated codons at the 5′ end, possibly to reduce ribosome traffic jams downstream during translation elongation (5,11,45,46), the ramp of hydrogen bonding found here in the same region starts with codons that reduce the energy required for unwinding dsDNA. Thus, the ramps of transcription and translation efficiency appear opposite but complementary in Bacteria and Archaea. This complementarity of speed can further reduce conflicts between the transcription and translation machineries (47).
From an evolutionary perspective, it will be interesting to further explore whether transcription or translation exerts a stronger selective pressure on local codon usage bias at the 5′ end of ORFeomes, as the genomic evidence presented here does not allow us to distinguish which mechanism drives selection. Nevertheless, the results presented here support the notion that the energy requirements for unwinding dsDNA of a CDS could be modulated by controlling the usage of synonymous codons to tune the number of hydrogen bonds. Although we found that the mean rate of increase of the number of hydrogen bonds per codon of Bacteria and Archaea is clearly higher than that of eukaryotes, some eukaryotes still showed a nonnegligible rate. We hypothesize that this may represent a signal of a remnant ramp that was lost in eukaryotes with the evolutionary emergence of packaged genomic DNA in the nucleus and further decoupling of transcription and translation. There is evidence in the literature showing that some nuclear sites can still support coupled transcription and translation in eukaryotes (48).
Most lines of evidence in this study resulted from focusing on the model organism E. coli. In the future, computational and experimental work should further investigate position-dependent hydrogen bonding in diverse genera in the tree of life. Future investigations should also consider integrating transcript and protein abundance data to investigate the role of position-dependent hydrogen bonding in the overall mechanism of protein biosynthesis.
Overall, we report the existence of a ramp of the number of hydrogen bonds that follows a bounded exponential function at the 5′ end of CDSs in Bacteria and Archaea. Optimization of cotranscriptional translation efficiency by reducing local hydrogen bonding can be another selective force driving the occurrence of AU-rich codons at the 5′ end of CDSs (35). The present work does not debunk any of the established translation-related and mRNA secondary structure-based theories of position-dependent codon usage bias (11,12,35). Instead, the ramp of hydrogen bonding encoded by a genomic signal adds another layer to the complexity of codon biology. The proposed mechanism for cotranscriptional translation efficiency might be another factor in the multiobjective optimization of gene expression, but more evidence is required. The genomic evidence compiled here suggests that effective coupling of transcription and translation at the 5′ end of CDSs of Bacteria and Archaea might be achieved by natural evolution via increasing the rate of occurrence of synonymous codons that also reduce hydrogen bonding, complementing the subtle effects of codons on the molecular biology of cells (2,6,13,33,45).
MATERIALS AND METHODS
ORFeome data sets. A comprehensive data set of ORFeomes (n = 14,511 in total), including 13,921 Bacteria, 297 Archaea, and 293 Fungi, was retrieved from NCBI/RefSeq (49). The filters used to compile the ORFeomes were "Latest RefSeq" and "Exclude anomalous." A smaller data set of representative ORFeomes (n = 1,766 in total) was compiled that included all the Bacteria (n = 1,176) from a previously curated list that has even representation across phyla (18) and all the Archaea (n = 297) and Fungi (n = 293) ORFeomes in the comprehensive data set.
For all ORFeomes analyzed in this work, CDSs with lengths not divisible by 3 and CDSs shorter than the number of codons analyzed (100 or 250) were removed from the data set. The start codon was removed from the data set before conducting any downstream analyses. The length and GC content of each CDS, and the GC content at each nucleotide position within a codon (GC1, GC2, and GC3), were calculated with SeqinR (50). Taxonomic affiliation of all downloaded ORFeomes was mapped using the XML file with the accession numbers of the ORFeomes and the table of lineages of all genomes deposited in NCBI. The table of lineages was generated using NCBItax2lin (https://github.com/zyxue/ncbitax2lin) with the NCBI taxonomy database (accessed February 2019). Information regarding the complete and representative ORFeome data sets can be found in Table S1 and Table S2 in the supplemental material, respectively.
Position-dependent number of hydrogen bonds. DNA sequences were analyzed using the R packages Biostrings (51) and SeqinR (50). Nucleotides in each coding sequence were arranged in a matrix with one row per CDS and one column per codon position analyzed. After quality control, all the CDSs in an ORFeome were left aligned from the 5′ end. The number of hydrogen bonds was computed and stored in a matrix according to the nucleotide base composition of CDSs (adenine [A] = thymine [T] = 2; guanine [G] = cytosine [C] = 3). The number of hydrogen bonds at each codon position in an ORFeome was computed by calculating the mean and the 95% confidence interval of the mean with nonparametric bootstrapping (1,000 bootstraps) using the Hmisc (52) package in R. Matrix analysis and bootstrapping of thousands of ORFeomes were possible due to parallelization of the computational processes across multiple computer cores using the R packages foreach (53), doParallel (54), and doSNOW (55).
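The per-position bookkeeping described above (A = T = 2, G ≡ C = 3; CDSs left-aligned at the 5′ end; bootstrap CI per column) is straightforward to reproduce. The authors worked in R with Biostrings, SeqinR, and Hmisc; the following is only a minimal Python sketch of the same logic, with invented toy CDSs for illustration.

```python
import random
from statistics import mean

# Hydrogen bonds per base pair, as defined in the text.
H_BONDS = {"A": 2, "T": 2, "G": 3, "C": 3}

def codon_hbonds(codon):
    """Total hydrogen bonds of a 3-nt codon."""
    return sum(H_BONDS[nt] for nt in codon.upper())

def position_profile(cdss, n_codons):
    """Mean H-bonds at each codon position, CDSs left-aligned from the 5' end."""
    profile = []
    for i in range(n_codons):
        values = [codon_hbonds(cds[3 * i:3 * i + 3])
                  for cds in cdss if len(cds) >= 3 * (i + 1)]
        profile.append(mean(values))
    return profile

def bootstrap_ci(values, n_boot=1000, alpha=0.05, seed=0):
    """Nonparametric bootstrap CI of the mean (cf. Hmisc in R)."""
    rng = random.Random(seed)
    means = sorted(mean(rng.choices(values, k=len(values)))
                   for _ in range(n_boot))
    return (means[int(n_boot * alpha / 2)],
            means[int(n_boot * (1 - alpha / 2)) - 1])

# Toy ORFeome of three hypothetical CDSs; drop the start codon first, as above.
cdss = ["ATGAAATTTGGGCCC", "ATGTTTAAAGGGCCC", "ATGAAAAAAGGGCCC"]
trimmed = [c[3:] for c in cdss]
print(position_profile(trimmed, 4))
```

Each column of the implicit CDS-by-codon matrix is summarized independently, so the profile length is simply the number of codon positions analyzed.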
The relative number of hydrogen bonds was calculated as the observed content divided by the maximum number of hydrogen bonds per amino acid. The scaled number of hydrogen bonds was calculated by centering and scaling the hydrogen bond contents of codons per amino acid using the scale function in R.
Analysis of hydrogen bonding in operons.
Operons of E. coli K-12 substrain MG1655 were delineated using the Operon-mapper Web server (56). The DNA_topLevel genomic sequence FASTA and the GFF files from EnsemblBacteria (bacteria.ensembl.org) were used as input. The number of codons to analyze per CDS was set to 100, and the minimum number of CDSs per operon was set to 2. All CDSs in the ORFeome were categorized according to their position within the operons, and all CDSs located at the same operon position were aligned by the start codon. The number of hydrogen bonds in CDSs of operons was quantified (i) in separate regions of 20 codons up to the 100th codon position, (ii) from codon position 1 to position 100, and (iii) over the entire length of CDSs.
Quantification of hydrogen bonding in highly and minimally expressed CDSs. Transcriptomic data (48 independent sets) generated from 16 different RNA-Seq experiments using E. coli K-12 substrain MG1655 in triplicate (32) were downloaded from the Gene Expression Omnibus (accession no. GSE45443). The transcript abundance estimates (in reads per kilobase per million [RPKM]), calculated using Rockhopper software, were retrieved from the reference (32) and then mapped to the E. coli K-12 substrain MG1655 genomic sequences obtained from GenBank (accession no. U00096.3). CDSs in each of the 16 experiments were ranked according to their transcript abundances, and the CDSs that appeared in all 16 experiments above or below the desired expression level threshold were grouped using the Reduce function in R for downstream quantification of hydrogen bonding. Six pairs of high expression thresholds (i.e., top 5%, 10%, 15%, 20%, 25%, and 30%) and low expression thresholds (i.e., bottom 13%, 18%, 23%, 26%, 30%, and 35%) were examined. The expression thresholds of the minimally expressed CDSs were set at levels that allowed similar numbers of CDSs to be compared against the corresponding highly expressed CDSs. The start codon was removed from all CDSs before quantification of the number of hydrogen bonds up to the 100th codon position. The mean number of hydrogen bonds per codon position of all CDSs in each group was fitted with the locally estimated scatterplot smoothing (LOESS) nonparametric regression model.
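The grouping step above keeps only the CDSs that clear the ranking threshold in every one of the 16 experiments; the authors did this with R's Reduce. A minimal Python sketch of the same set-intersection idea follows, with made-up gene names and RPKM values.

```python
from functools import reduce

def top_fraction(rpkm, frac):
    """Set of CDS ids in the top `frac` of one experiment, ranked by abundance."""
    ranked = sorted(rpkm, key=rpkm.get, reverse=True)
    k = max(1, int(len(ranked) * frac))
    return set(ranked[:k])

# Three hypothetical RNA-Seq experiments (gene -> RPKM).
experiments = [
    {"geneA": 900.0, "geneB": 500.0, "geneC": 20.0, "geneD": 5.0},
    {"geneA": 800.0, "geneB": 450.0, "geneC": 30.0, "geneD": 1.0},
    {"geneA": 950.0, "geneB": 100.0, "geneC": 600.0, "geneD": 2.0},
]

# CDSs above the 50% threshold in *every* experiment (cf. Reduce in R).
consistently_high = reduce(set.intersection,
                           (top_fraction(e, 0.5) for e in experiments))
print(sorted(consistently_high))
```

The same function with the ranking reversed yields the minimally expressed group, so matched high/low cohorts can be built with one helper.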
Position-dependent occurrence of frequent codons and optimal codons. The position-dependent occurrences of frequent codons and of rare codons were computed with relative synonymous codon usage (RSCU) values (57), and the frequencies of optimal codons were computed with codon adaptation index (CAI) values (57). RSCU and CAI values were calculated as described previously (33) except that the geometric mean was not computed for each CDS. Instead, each codon was assigned a value according to the table of codon usage calculated with the function uco in SeqinR (50). By default, codons containing an undetermined nucleotide (N) were assigned the value "1" (no bias). RSCU and CAI values corresponding to every codon position of an ORFeome were calculated as the mean and the 95% confidence interval of the mean with nonparametric bootstrapping (1,000 bootstraps).
Position-dependent translation efficiency. Position-dependent translation efficiency was estimated with tRNA adaptation index (tAI) values (58). Position-dependent tAI values were calculated using the s vector s_prokaryote = (0, 0, 0, 0, 0.41, 0.28, 0.9999, 0.68, 0.89) for Bacteria and Archaea and s_eukaryote = (0, 0, 0, 0, 0.41, 0.28, 0.9999, 0.68, 0.89) for Fungi, as suggested previously (59). CodonR, the original algorithm used to compute tAI values (github.com/mariodosreis/tai), was customized in R to retrieve the value of every codon in a position-dependent manner. tRNA data sets for model organisms were obtained from the genomic tRNA database (GtRNAdb) (v2.0) (60) and the tRNA gene database curated by experts (tRNADB-CE) (v12.0) (61). The matrix of codon usage to compute tAI was obtained with CodonM (github.com/mariodosreis/tai/blob/master/misc/codonM). The parameter sking in the get_ws function was set to a value of 0 for eukaryotes and a value of 1 for Bacteria and Archaea. The tAI value at every codon position of an ORFeome was calculated as the mean and the 95% confidence interval of the mean with nonparametric bootstrapping (1,000 bootstraps).
Position-dependent mRNA secondary structure. The mRNA secondary structure was predicted by calculating the probability of a base being unpaired in the mRNA molecule using the program RNAplfold (v2.4.14) from the ViennaRNA package 2.0 (62) with the parameters L = 40, W = 40, and u = 40. Data representing secondary structure probabilities were parsed to R objects using a previously described method (63). The mean probability of each base being unpaired was calculated as the mean of all probabilities of that base being unpaired in any position, and the probability of a codon being unpaired was calculated as the mean over its bases. The probability of a codon forming a secondary structure in the mRNA molecule was calculated as the difference between 1 and its probability of being unpaired. The probability of a codon forming a secondary structure at every codon position of an ORFeome was calculated as the mean and the 95% confidence interval of the mean with nonparametric bootstrapping (1,000 bootstraps).
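The codon-level conversion described above is simple arithmetic on RNAplfold's per-base unpaired probabilities: P(codon structured) = 1 − mean of P(base unpaired) over the codon's three bases. A small Python sketch with invented probabilities (not actual RNAplfold output) illustrates it.

```python
from statistics import mean

def codon_structure_prob(unpaired_probs):
    """Per-codon probability of secondary-structure formation.

    unpaired_probs: P(base unpaired) for each base of the transcript,
    in 5'-to-3' order; trailing bases that do not complete a codon are ignored.
    """
    probs = []
    for i in range(0, len(unpaired_probs) - 2, 3):
        p_unpaired = mean(unpaired_probs[i:i + 3])
        probs.append(1.0 - p_unpaired)
    return probs

# Two codons' worth of hypothetical per-base unpaired probabilities.
p_unpaired = [0.9, 0.8, 1.0, 0.2, 0.3, 0.1]
print(codon_structure_prob(p_unpaired))
```

A mostly unpaired codon (first triplet) maps to a low structure probability, and a mostly paired codon (second triplet) maps to a high one, which is the quantity bootstrapped per position in the text.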
Model fitting. The uniform model [y(x) = A], the linear model [y(x) = Bx + C], and the bounded exponential model (equation 1) were used to model the mean number of hydrogen bonds per codon as a function of codon position (starting from the 2nd codon position).
In the models, y is the mean number of hydrogen bonds and x is the codon position; A is the carrying capacity of hydrogen bonds, defined as the maximum average number of hydrogen bonds that a particular codon position can contain in an ORFeome; B is the rate of hydrogen bonds per codon, defined as the change in the number of hydrogen bonds per codon; and C is the initial content, defined as the number of hydrogen bonds at the first codon after the start codon.
The models were fitted to hydrogen bonding data with the first 100 codon positions as the independent variable and the mean number of hydrogen bonds as the dependent variable. A self-starting nonlinear least-squares logistic model was used to estimate the initial parameters, and weighted least squares for a nonlinear model was used to estimate the final parameters (both computed in R). As described previously (34), the Akaike information criterion (AIC) and Bayesian information criterion (BIC) were used to select the model that best fitted a data set. In cases in which the exponential model could not be successfully fitted but parameters were needed for downstream analyses, the initial content and carrying capacity parameters were calculated, respectively, as the minimal number of hydrogen bonds among all codon positions per ORFeome and the trimmed mean number of hydrogen bonds among all codon positions calculated after filtering out 20% of the codons (10 codons from each end).
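Equation 1 itself is not reproduced in this excerpt. A common bounded exponential (monomolecular) form consistent with the three parameters described, y(x) = A − (A − C)·e^(−Bx), can be fitted without a nonlinear solver by pinning A near the plateau and regressing log(A − y) on x over the ramp region; note that this functional form, the linearization trick, and all numbers below are illustrative assumptions, not the authors' R procedure.

```python
import math

def fit_bounded_exponential(x, y, ramp_len=20, eps=1e-6):
    """Fit y(x) = A - (A - C) * exp(-B * x).

    A (carrying capacity) is pinned just above max(y); B (rate) and
    C (initial content) follow from an ordinary least-squares line
    through log(A - y) over the ramp region (x <= ramp_len).
    """
    A = max(y) + eps
    pts = [(xi, math.log(A - yi)) for xi, yi in zip(x, y) if xi <= ramp_len]
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    slope = (sum((px - mx) * (py - my) for px, py in pts)
             / sum((px - mx) ** 2 for px, _ in pts))
    intercept = my - slope * mx
    return A, -slope, A - math.exp(intercept)  # A, B, C

# Synthetic noiseless ramp with hypothetical parameters A=7.8, B=0.25, C=6.9.
xs = list(range(1, 101))
ys = [7.8 - (7.8 - 6.9) * math.exp(-0.25 * xi) for xi in xs]
A, B, C = fit_bounded_exponential(xs, ys)
print(round(A, 2), round(B, 2), round(C, 2))
```

Restricting the regression to the first ~20 positions matches the observation in the text that the carrying capacity becomes steady at about the 20th codon; beyond the plateau, log(A − y) is dominated by noise and the eps offset.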
Phylogenomic analysis. The translated CDSs of the representative data set of ORFeomes were used to construct a phylogenetic tree using the large-scale phylogenetic profiling of genomes method in PhyloPhlAn2 (bitbucket.org/nsegata/phylophlan/wiki/phylophlan2). The supermatrix_aa config file was used as input to build the tree with the parameters diversity=high and database=phylophlan. The ramp rates estimated from the bounded exponential model were mapped to each branch of the tree using ggtree (64) to integrate the phylogeny and hydrogen bonding parameters.
Building the position-dependent null models of ORFeomes with shuffled codons. The null model to test selection against a uniform distribution of codons was built by shuffling synonymous codons within all CDSs in each ORFeome. A total of 200 simulated ORFeomes were built for each of the 1,496 ORFeomes (only Bacteria and Archaea) in the representative data set, from which we obtained the expected value and standard deviation of the number of hydrogen bonds per codon position as described in detail elsewhere (33). Having the observed and expected occurrence of the number of hydrogen bonds per codon, we then computed the z² values of the χ² statistic as shown in equation 2.
Statistics, data analysis, and data visualization. Data analysis was conducted in R (v3.6.0) using RStudio (v1.2.1335). The R package tidyverse (65) was used for data analytics, ggplot2 (66) for data visualization, and cowplot (67) for assembling multiple figure panels. Unless otherwise specified, differences between sample groups were tested using the two-sided, nonpaired Wilcoxon rank sum test (Mann-Whitney test). The Kruskal-Wallis test was applied in the operon analysis to test the statistical significance of the differences in the number of hydrogen bonds between CDSs of each region. Correction of P values in multiple testing was done with the Benjamini and Yekutieli method (68). Pearson's product-moment coefficient was used for linear correlation analyses, and Spearman's statistic was used to estimate a rank-based measure of association; Spearman's statistic was also used in the correlation network analyses. A generalized additive model (GAM) was used to describe position-dependent hydrogen bonding as a function of the probability of mRNA secondary structure formation. Scaled z²-gram values were calculated by centering and scaling each ORFeome.
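Given the null mean and standard deviation from the shuffled-codon ORFeomes, the per-position statistic is just a squared z-score: z² = ((observed − expected) / sd)². A tiny Python sketch follows; the per-position values are invented for illustration and stand in for the summaries from the 200 simulated ORFeomes.

```python
def z_squared(observed, null_mean, null_sd):
    """Squared z-score of the observed value against the shuffled-codon null."""
    z = (observed - null_mean) / null_sd
    return z * z

obs = [6.4, 6.6, 6.9]       # observed mean H-bonds per position (hypothetical)
null_mu = [7.0, 7.0, 7.0]   # null expectation from shuffled ORFeomes
null_sd = [0.2, 0.2, 0.2]   # null standard deviation
z2 = [z_squared(o, m, s) for o, m, s in zip(obs, null_mu, null_sd)]
print([round(v, 2) for v in z2])
```

Large z² at the earliest positions, where observed hydrogen bonding falls well below the shuffled-codon expectation, is the signature of position-dependent selection the null model is designed to expose.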
Normalized z² values were computed using the min-max normalization function for each ORFeome (equation 4), x_norm = (x − min_x) / (max_x − min_x), where x is the z²-gram value (equation 3), min_x is the minimum z²-gram value of an ORFeome, and max_x is the maximum z²-gram value of an ORFeome.
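The min-max normalization of equation 4 maps each ORFeome's values onto [0, 1]; a one-function Python sketch with arbitrary example values:

```python
def min_max_normalize(values):
    """Equation 4: (x - min) / (max - min), mapping values onto [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(min_max_normalize([2.0, 4.0, 6.0, 10.0]))
```

Because normalization is done per ORFeome, the resulting values are comparable across genomes regardless of each genome's absolute scale.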
Code and data availability. The scripts required to reproduce all the results and figures can be obtained from https://github.com/PLeeLab/H_bonds_ramp. We developed a Web application (https:// juanvillada.shinyapps.io/hbonds/) for users to analyze the position-dependent content of hydrogen bonding of ORFeomes.
SUPPLEMENTAL MATERIAL
Supplemental material is available online only.
"year": 2020,
"sha1": "f8020d1026af75dadc337a3d7a5f8a82c1f64cd3",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1128/msystems.00613-20",
"oa_status": "GOLD",
"pdf_src": "ASMUSA",
"pdf_hash": "1c51c20258402e710e7d38cb119c9f2f002c5bcb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Distal pancreatectomy with splenectomy for the management of splenic hilum metastasis in cytoreductive surgery of epithelial ovarian cancer
Objective: Distal pancreatectomy with splenectomy may be required for optimal cytoreductive surgery in patients with epithelial ovarian cancer (EOC) metastasized to the splenic hilum. This study evaluates the morbidity and treatment outcomes of this uncommon procedure in the management of advanced or recurrent EOC. Methods: This study recruited 18 patients who underwent distal pancreatectomy with splenectomy during cytoreductive surgery of EOC. Their clinicopathological characteristics and follow-up data were retrospectively analyzed. Results: All tumors were confirmed as high-grade serous carcinomas. The median diameter of metastatic tumors located in the splenic hilum was 3.5 cm (range, 1 to 10 cm). Optimal cytoreduction was achieved in all patients. Eight patients (44.4%) suffered from postoperative complications. The morbidity associated with distal pancreatectomy and splenectomy included pancreatic leakage (22.2%), encapsulated effusion in the left upper quadrant (11.1%), intra-abdominal infection (11.1%), pleural effusion with or without pulmonary atelectasis (11.1%), intestinal obstruction (5.6%), pneumonia (5.6%), postoperative hemorrhage (5.6%), and pancreatic pseudocyst (5.6%). There was no perioperative mortality. The majority of complications were treated successfully with conservative management. During the median follow-up duration of 25 months, nine patients experienced recurrence, and three patients died of the disease. The 2-year progression-free survival and overall survival were 40.2% and 84.8%, respectively. Conclusion: The inclusion of distal pancreatectomy with splenectomy as part of cytoreduction for the management of ovarian cancer was associated with high morbidity; however, the majority of complications could be managed with conservative therapy.
INTRODUCTION
Ovarian cancer is a leading type of life-threatening malignancy and accounts for over 140,000 deaths per year worldwide [1]. Cytoreductive surgery combined with platinum-based chemotherapy is the standard treatment for ovarian cancer. Prospective clinical trials and retrospective studies have revealed that optimal cytoreduction improves survival and that postoperative residual disease is one of the most prominent prognostic parameters [2-4]. To achieve optimal cytoreduction, the use of extensive upper abdominal surgery has become widely accepted by gynecologic oncology surgeons [5].
The splenic hilum occupies a relatively low position in the left upper quadrant of the abdomen when a patient is in the supine position. In advanced epithelial ovarian cancer (EOC), tumor cells in ascites may travel to and implant in this region. Thus, during cytoreductive surgery for primary or recurrent EOC, metastasized tumors in the splenic hilum are occasionally encountered. These metastases may be difficult to dissect from the pancreatic tail and spleen. In such cases, removal of the pancreatic tail and spleen is required to achieve complete resection of the metastatic tumor for optimal reduction. However, this surgical procedure is rarely performed because surgeons tend to have limited experience with the proximal anatomy and limited knowledge of the associated morbidity and mortality. To date, limited retrospective studies of small case series presenting the incidence of pancreatic fistula following splenectomy and distal pancreatectomy have been published [6-8]. More studies are needed to evaluate the benefits and risks of performing this procedure for the management of splenic hilum metastasis during cytoreductive surgery of advanced and recurrent EOC. At Fudan University Shanghai Cancer Center (FUSCC), gynecologic oncology subspecialists have been performing this surgical procedure in collaboration with upper gastrointestinal surgeons since 2008. As experience with the procedure has increased, gynecologic oncologists have begun performing the surgery independently. The present study presents our clinical experiences associated with the procedure over the past 6 years.
Patients
The current study was performed retrospectively and was approved by our center's Institutional Review Board (SCCIRB-090371-2). Cytoreductive surgeries were carried out for approximately 4,100 patients with advanced or recurrent EOC at FUSCC between April 2009 and September 2015. During this period, 91 splenectomies without indication for distal pancreatectomy were performed. A total of 18 consecutive patients who underwent distal pancreatectomy with splenectomy were recruited in this study. Their clinicopathologic characteristics, operation data, and postoperative events were obtained by reviewing inpatient medical records. Disease progression, recurrence, and survival data were obtained from outpatient medical records. The patients were followed up until October 31, 2015. The histopathologic features of the metastatic tumors found in the splenic hilum were reviewed by the pathologist (X. Shen).
Surgical techniques
After dissection of the perisplenic ligaments and short gastric vessels, the splenic artery and vein were isolated and separately ligated. Distal pancreatic tissue was carefully dissected to isolate the major pancreatic duct to the greatest extent possible. The major pancreatic duct was ligated or sutured separately if visible. The pancreatic tail, spleen, and metastatic tumors within the splenic hilum were removed en bloc. About 5% to 30% of the distal pancreas was resected. The transected end of the pancreas was closed with interrupted mattress sutures using 4-0 absorbable sutures. After suturing, one to three drainage tubes were placed around the remaining pancreas.
Postsurgical observation and management
A broad-spectrum antibiotic combined with an agent against anaerobic bacteria, such as cefuroxime plus metronidazole, was administered intravenously on the first and second postoperative days. Octreotide (0.1 mg) was injected subcutaneously every 8 hours for the first 5 days in seven patients. Amylase in drainage fluids was tested every second day to monitor for pancreatic fistula. The patients began a fluid diet after the first flatus. The peripancreatic drainage tubes were removed if the following criteria were satisfied: (1) the patient had no fever and no upper abdominal pain after resuming diet; (2) there was no fluid in the drainage tubes for more than 1 day, or the amylase value in the drainage fluid was normal. The patients began systemic chemotherapy when they tolerated a normal diet and their Eastern Cooperative Oncology Group (ECOG) performance status score was no more than 2.
Statistical analysis
Disease stage was defined according to the 2014 International Federation of Gynecology and Obstetrics (FIGO) ovarian cancer staging guidelines. Optimal cytoreduction of EOC was defined as <1 cm residual tumor volume. Pancreatic fistula was defined and classified according to International Study Group of Pancreatic Fistula criteria [9]. Descriptive statistics were employed in this study. Inter-group comparisons were performed using Fisher exact test. Progression-free survival (PFS) and overall survival (OS) were assessed using the Kaplan-Meier method. PFS was defined as the interval between cytoreductive surgery and recurrence or progression of the disease, and OS was defined as the interval between surgery and death. Significance was defined as p<0.05. All statistical analyses were performed using SPSS ver. 16.0 (SPSS Inc., Chicago, IL, USA).
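The Kaplan-Meier method used above for PFS and OS handles censored follow-up by multiplying conditional survival fractions at each event time. The authors computed it in SPSS; the following is only a minimal Python sketch of the estimator, with invented follow-up times and censoring flags.

```python
from itertools import groupby

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each event time.

    times: follow-up in months; events[i] = 1 for progression/death,
    0 for censored observations.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    for t, group in groupby(data, key=lambda p: p[0]):
        group = list(group)
        d = sum(e for _, e in group)  # events at time t
        if d:
            surv *= (n_at_risk - d) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= len(group)       # events and censored leave the risk set
    return curve

# Hypothetical cohort: 1 = event, 0 = censored at last follow-up.
times = [6, 10, 10, 14, 20, 25, 25, 30]
events = [1, 1, 0, 1, 0, 1, 0, 0]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))
```

Censored patients contribute to the risk set up to their last follow-up but never trigger a drop in the curve, which is what distinguishes this estimator from a naive event fraction.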
Clinicopathological characteristics
The median age of the 18 enrolled patients was 54.5 years (range, 39 to 75 years). Nine patients underwent distal pancreatectomy with splenectomy for primary EOC, and the remaining nine patients underwent the procedure for recurrent disease. The ECOG performance status of the patients was 0 or 1. Metastatic tumors in the splenic hilum were preoperatively identified via computed tomography in eight patients and via positron emission tomography/computed tomography in seven patients (Fig. 1A, B, respectively). In the remaining three patients, preoperative radiology did not reveal metastatic tumors in the splenic hilum. FIGO staging, serum cancer antigen 125 levels, and histological subtypes of the tumors are shown in Table 1. Table 2 depicts the surgical parameters associated with the primary group and the recurrent group patients. Widely disseminated carcinomas in the abdominopelvic cavity were found in all nine patients with primary disease and in three of the patients with recurrent disease. These patients underwent extensive upper-abdominal, middle-abdominal, lower-abdominal, or retroperitoneal cytoreductive surgeries, including total abdominal hysterectomy, bilateral salpingo-oophorectomy, omentectomy, distal pancreatectomy with splenectomy, bowel resection, appendectomy, stripping of the diaphragm or other peritoneal surfaces, pleural tumor resection, pelvic and para-aortic lymphadenectomy, or groin lymphadenectomy. Localized tumors in the spleen and pancreas were explored in the remaining six patients with recurrent disease. All 18 patients underwent distal pancreatectomy with splenectomy. The median surgical duration was 3.0 hours (range, 1.0 to 5.2 hours). Seven patients experienced blood loss of a volume greater than 1,000 mL. Ten patients underwent blood transfusions with a median volume of 1,150 mL (range, 500 to 1,800 mL). Eight patients required postoperative intensive care unit admission.
The median hospitalization period following surgery was 9 days (range, 6 to 17 days). The median interval from surgery to adjuvant chemotherapy was 18 days.
Residual disease
The median diameter of the metastatic tumors found within the splenic hilum was 3.5 cm (range, 1 to 10 cm) (Fig. 1C, D). Histological examination confirmed the presence of capsular and parenchymal tumors in the spleen and/or pancreas. All 18 patients achieved optimal cytoreduction, with microscopic residual disease in 11 patients, residual tumors less than 0.5 cm in diameter in four patients, and residual tumors between 0.5 and 1 cm in diameter in three patients. The residual carcinomas were located in the porta hepatis, small bowel mesentery, intestinal wall, and thoracic cavity.
Postoperative complications
Eight patients (44.4%) suffered from postoperative complications, including pancreatic fistula, pleural effusion and atelectasis, encapsulated effusion in the left upper quadrant, intra-abdominal infection, hemorrhage, pneumonia, pancreatic pseudocyst, and intestinal obstruction. One patient had a grade B pancreatic fistula and required prolonged intra-abdominal drainage and hospitalization. Transient reactive thrombocytosis and leukocytosis were observed in all of the patients; however, no related adverse clinical consequences were observed. No cases of clinically apparent new-onset diabetes or uncontrollable infection were observed following surgery through the end of the study.
Follow-up data
All patients were treated with six to eight cycles of postoperative platinum-based chemotherapy. The median follow-up duration was 25 months (range, 3 to 68 months). Nine patients (50.0%) experienced recurrence of EOC after distal pancreatectomy and splenectomy. Three patients died of the disease, and 15 patients were still alive at the end of the study. The 2-year PFS and OS were 40.2% and 84.8%, respectively (Fig. 2).
DISCUSSION
Over 70% of patients with EOC initially present at an advanced stage. The majority of these cases have upper abdominal metastasis. The necessity of performing upper abdominal surgery in such cases has been increasingly accepted by gynecologic oncologists. Splenic hilum metastasis is not rare in cases with metastasis in the left upper quadrant of the abdomen. As pancreatectomy and splenectomy can significantly increase postoperative morbidity [10] and because the associated complications can cause serious and even fatal consequences [11,12], the vast majority of gynecologic surgeons choose to leave these metastatic tumors alone. In such cases, optimal cytoreductive surgery is impossible. Thus, it is necessary to assess the risks and benefits of performing these surgical procedures as a component of cytoreductive surgery for the treatment of primary and recurrent EOC. The present study is the largest series published to date that specifically addresses distal pancreatectomy with splenectomy for the management of splenic hilum metastasis during cytoreductive surgery of EOC.
Splenic hilum metastasis of EOC has been considered an obstacle to optimal cytoreduction in most institutes. In recent years, various preoperative prediction models for optimal and suboptimal cytoreductive surgeries have been used to evaluate newly diagnosed advanced EOC. Tumor extension into the spleen or pancreas is a predictor of suboptimal cytoreduction in all models. However, optimal cytoreduction rates during upfront surgeries vary among institutions [8]. Similar to other centers that have reported this procedure [8], we began performing it with the aid of upper gastrointestinal surgeons to help overcome the associated learning curve.
The surgeons at our institute agree that removal of the pancreatic tail and spleen is appropriate in EOC patients with splenic hilum metastasis in cases in which these patients can achieve optimal cytoreductive results following this procedure. However, in patients with unresectable bulky tumors, it is not worth conducting such a high-risk procedure. Thus, the decision to perform this procedure should be made after careful exploration of the abdominopelvic cavity and after assessment of the resectability of tumors present in the pelvis, middle-abdomen, and upper-abdomen. The procedure can be performed after the removal of all bulky tumors located in other anatomic structures. In the present study, the median diameter of the tumors resected from the splenic hilum was 3.5 cm, and all of the patients achieved optimal cytoreduction after distal pancreatectomy and splenectomy. Some of the patients presented with residual disease (less than 1.0 cm) in the porta hepatis or thoracic cavity. Other patients had small, disseminated tumors in the small bowel mesentery and intestinal wall. At the present stage, tumors in these anatomic regions are considered unresectable.
The postoperative morbidity associated with distal pancreatectomy and splenectomy was high (44%). However, the majority of these complications were mild and required only pharmacotherapy or observation. Pancreatic leakage was the most common complication associated with distal pancreatectomy. The incidence of this complication is approximately 25% according to previous studies and the current study [6]. Only one out of 18 patients had a grade B pancreatic fistula, and no patient had a grade C fistula in our series. All the patients with pancreatic fistula in the current study recovered after conservative treatment; these treatments included prolonged drainage, delayed resumption of oral intake, and total parenteral nutrition or the use of octreotide therapy (Table 3). To prevent its occurrence and reduce its damage, the gynecologic oncology surgeons at MSKCC utilize vascular staplers to seal the tail of the pancreas [8]. Diener et al. [14] suggested that stapler closure did not reduce the rate of pancreatic fistula compared to hand-sewn closure for distal pancreatectomy. In our opinion, separate ligation of the major pancreatic duct may help prevent the development of grade B or C pancreatic fistulas. Adequate peripancreatic drainage may reduce the life-threatening adverse effects associated with pancreatic fistula.
The rates of other severe complications related to distal pancreatectomy and splenectomy were relatively low (as shown in Table 3). No cases of postoperative clinical diabetes were observed after partial removal of the pancreas in the current study. King et al. [13] suggested that the rate of new-onset diabetes after distal pancreatectomy is minimal. Thrombocytosis and leukocytosis were observed in all of the patients. However, overwhelming post-splenectomy infection (OPSI) was not observed in this series of patients. It has been reported that the majority of OPSI cases occur in infants and children. Intra-abdominal infection without intestinal perforation is easily controlled by the administration of broad-spectrum antibiotics. Encapsulated effusion or the development of an asymptomatic pancreatic pseudocyst does not always require special intervention, as the majority of such complications resolve spontaneously. Left pleural effusion and atelectasis were treated by catheterization. While treating these complications, systemic chemotherapy was delayed in only one of the 18 patients.
Because of the limited number of cases and the limited duration of our follow-up, the survival benefits of distal pancreatectomy with splenectomy cannot be assessed in the current study. However, evidence has shown that aggressive resection of upper-abdominal metastatic tumors may not only improve survival but also enhance quality of life [5,15].
In conclusion, performing distal pancreatectomy with splenectomy during cytoreductive surgery is associated with a high morbidity rate; however, the majority of associated complications can be managed using conservative therapy. Metastasis to the splenic hilum is likely not an insuperable obstacle for optimal cytoreductive surgery of EOC.
Mental training program in racket sports: A systematic review
The mental aspect is largely acknowledged by athletes and coaches as a salient factor explaining performance variability. The mental component of performance holds a special place in racket sports considering the inherent demands of such intense and emotional activities. The importance of mental skills in racket sports has been examined within the literature through a bulk of studies highlighting associations between mental skills and a wide range of positive outcomes. Access to programs that aim to improve the mental skills of athletes represents a major issue for researchers and the different stakeholders (coaches, athletes, parents). The main objectives of this study were to (a) collect the studies that incorporate mental training programs used in racket sports, (b) organize the current knowledge on mental training programs and provide a synthesis of the characteristics of these studies, and (c) identify the gaps in the literature on this topic and propose potential further investigations and practical implications. The present systematic review included 27 studies involving 715 participants. Most of the studies used a quantitative approach and were conducted on tennis. The mental skills developed varied across the studies, dominated by imagery and relaxation techniques. Overall, the programs led to positive outcomes on performance indicators (e.g. improvement of service efficacy and stroke quality) and permitted the development of the targeted mental skills (e.g. concentration, motivation). This review highlighted the weak representation of females and novice players among the studies' participants. Moreover, the unequal representation of techniques and outcomes in the examined studies encourages the development of further mental programs specifically tailored to the demands of racket sports and a focus on different mental skills (e.g. emotional intelligence, coach education).
Mental training program in racket sports: A systematic review
Racket sports refer to the physical activities involving rackets to strike a ball or a shuttlecock (Lees, 2003). These activities include popular sports such as tennis, table-tennis, badminton, and squash, but also newer activities such as paddle tennis or racketlon. Previous studies have highlighted the complexity of these sports due to the wide variety of factors involved in performance variability (Lees, 2003). To perform in a racket sport, an athlete has to develop technical, physiological, tactical, and mental skills. Athletes and coaches largely acknowledge that the mental aspect is a salient factor and should be trained in the same way as physical or technical components (Jones, 1995). The development of sports sciences and the growing number of studies focused on elite performance have led to the implementation of training programs oriented toward specific components of sport performance (Kondric, Matković, Furjan-Mandić, Hadzić, & Dervisević, 2011). In this context, mental training is the training dedicated to mental skills: internal competences that help athletes reach their goals by learning to manage their psychological states in keeping with their objectives. Mental training mainly aims to improve the well-being and performance level of athletes (Behncke, 2004; Morais & Rui Gomes, 2019). Mental training in sport settings consists of several stages (Terry, Coakley, & Karageorghis, 1995). First, the mental trainer (coach, sport psychologist) should assess the initial skills of the athlete. Second, a mental training program is usually proposed in order to develop targeted mental skills. The programs are composed of intervention sessions including one or many techniques such as relaxation, imagery practice, or cognitive behavioural therapies (Jones, 1995). Third, partial and complete evaluations inform the development and use of mental skills.
As with physical training, the preparation should be suited to the demands of the activity (Mamassis & Doganis, 2004). Consequently, in order to implement mental training, an investigation of the specific skills inherent to the demands of racket sports has to be carried out.
Racket sports demands
Racket sports are associated with specific constraints which differ from other individual sports and involve particular training demands (Dohme, Bloom, Piggott, & Backhouse, 2019; Kondric et al., 2011). An analysis of the sport's characteristics can reveal the main mental demands and thereby help identify the key skills to develop among racket sport athletes. First, a crucial characteristic of racket sports is the speed of the ball/shuttlecock and, in turn, the accuracy required of every stroke played (Akpinar, Devrilmez, & Kirazci, 2012). These parameters limit the margin of error for each stroke and impose composure in stressful situations to prevent errors (Ducrocq, Wilson, Smith, & Derakshan, 2017). These sports also require the learning of motor skills (e.g., accurate and powerful strokes) and are characterised by an important volume of training (a large number of repetitions) (Doherty, Martinent, Martindale, & Faber, 2018). Consequently, an athlete practicing a racket sport has to be prepared to invest resources (sport motivation) despite the physical, psychological, and social constraints inherent to the practice of racket sports (Martinent, Decret, Guillet, & Isoard-Gautheur, 2014; Martinent & Decret, 2015). During competitive matches, racket sport players perform a series of repeated short and intense efforts (Kondric et al., 2011). Moreover, a competition is composed of successive matches across several competitive days. The players should continually project into future points or future games and should thus avoid ruminating about previous situations, behaviours, and/or results (emotional regulation). Another major issue of racket sports is the presence of an opponent (Bebetsos & Antoniou, 2003; Caserta, Young, & Janelle, 2007; Poizat, Bourbousson, Saury, & Sève, 2009). The duel is thus central to performance variability, and every player has to focus on the reactions, choices, and behaviours of the other competitor.
Mental skills
In line with the exploration of the main characteristics of racket sports, a variety of key skills has been revealed within a large body of literature (Crespo & Reid, 2007; Gould, Lauer, Rolo, Jannes, & Pennisi, 2008; Lees, 2003; Martinent, Cece, Elferink-Gemser, Faber, & Decret, 2018; Riemer & Chelladurai, 1998). This knowledge base provides the basis for developing mental training programs. A review has highlighted the role of mental toughness in the major racket sports (Lees, 2003), involving targeted skills (e.g. motivation, emotional control, self-confidence). In particular, considering the daily training demands of these activities, motivation has been identified as an essential factor of long-term performance and continued participation in tennis (Crespo & Reid, 2007) and table tennis (Martinent, Cece, Elferink-Gemser, Faber, & Decret, 2018). Across the career, determination and enthusiasm have been identified as factors of performance (Lees, 2003). The environment has also been identified as a determinant of well-being and performance in racket sports, especially considering the impact of parents (Gould et al., 2008; Harwood & Knight, 2009) and coaching leadership (González-García, Martinent, & Trinidad, 2019; Kwon, Pyun, & Kim, 2010; Riemer & Chelladurai, 1998; Sharma, 2015) on the athletes' outcomes, behaviours, and performance. Due to the competitive format and the necessity to repeat efforts despite errors or under-performance, self-confidence has been identified as a salient factor in racket sport studies such as tennis (Covassin & Pero, 2004) and badminton (Bebetsos & Antoniou, 2003). Moreover, the stressful nature of matches in racket sports warrants special attention to the athletes' emotional skills.
In particular, emotional control (emotional regulation) has been revealed as a central skill in racket sports such as table tennis (Martinent & Ferrand, 2009;Martinent, Ledos, Ferrand, Campo, & Nicolas, 2015;Sève, Ria, Poizat, Saury, & Durand, 2007) and tennis (Bolgar, Janelle, & Giacobbi, 2008;Laborde, Lautenbach, Allen, Herbert, & Achtzehn, 2014). Similarly, the racket sports literature has also mentioned anxiety control and use of coping strategies as predictors of various outcomes such as well-being and competitive performance in badminton (Bebetsos & Antoniou, 2003), tennis (Bolgar et al., 2008), squash (Mace & Carroll, 1986) and table tennis (Laborde et al., 2014;Martinent & Decret, 2015). Other skills consistent with the high requirements of accuracy and velocity of the racket sports strokes have been highlighted. The literature has provided evidence of the importance of attention control skills in badminton (Bastug, Ağilönü, & Balkan, 2017) and table tennis (Caliari, 2008). In the same way, flow and awareness skills have been related to racket sports performance (Koehn, Morris, & Watt, 2013;Wolf et al., 2015). Finally, based on the rationale that racket sports can be categorised as open skills activities, previous studies have revealed the main role of decision making (del Villar, González, Iglesias, Moreno, & Cervelló, 2007;Hastie, Sinelnikov, & Guarino, 2009) and mental quickness (Williams, Ward, Smeeton, & Allen, 2004) in performance variability.
The present study
Racket sports characteristics lead to specific training requests for these activities (Lees, 2003;Mamassis & Doganis, 2004). Simultaneously, the mental aspect of performance has become a central preoccupation for athletes and coaches (Jones, 1995;Lees, 2003). The relevance of the mental skills in racket sports has been proved through a vast body of literature highlighting associations between mental skills and positive outcomes such as performance and well-being (Jones, 1995;Lees, 2003). For both researchers and practitioners, it seems important to disseminate the studies implementing mental training programs designed to improve athletes' mental skills. However, to the best of our knowledge, no study has summarized the research dealing with mental training programs in racket sports. As such, the main aims of this study were to (a) collect the studies that incorporate mental training programs used in racket sports, (b) organize the current knowledge on mental training programs and provide a synthesis of the characteristics of these studies, and (c) identify gaps in the literature on this topic, and propose potential further investigations and practical implications.
Procedure
The electronic search was performed via EBSCOhost. Three databases were used (PsycARTICLES, PsycINFO, and SPORTDiscus), and the following keywords were searched within the title and abstract: mental, psychological, racket, tennis, table-tennis, badminton, or squash. The reference lists of all articles obtained were also examined for other relevant studies.
Studies had to meet the following inclusion criteria: (1) electronically accessible in the English language; (2) published in a scientific peer-reviewed journal; (3) original studies with a specific mental training program presented and tested in the study; and (4) applied exclusively to one or more racket sports. The inclusion/exclusion procedure of the present study followed the systematic review process and is summarised in Figure 1. The first search revealed 565 references. We chose not to include a timeframe selection criterion because of the limited number of studies on mental training programs in racket sports. After removal of duplicates, 492 references remained. We assessed the abstracts of these references, and 357 were removed for non-compliance with the inclusion criteria. This level of loss can be explained by the large number of studies exploring associations between mental skills and performance without a mental training program. The full-text articles were then assessed, and 27 references remained after the final assessment for eligibility.
Data extraction
The selected studies were classified according to date of publication, sample characteristics (sample size, gender, competitive level), sport studied, mental training techniques, the goal of the program, and outcomes of the program. The results were analysed using descriptive statistics, including distributions, with the software Statistica (Hilbe, 2007). The characteristics of the studies are summarised in Table 1.
General characteristics
Twenty of the 27 studies examined (74.1%) were published between 2005 and 2019. Seven studies were published before 2005 (25.9%) and only three before 2000 (11.1%). The most frequent journals were Perceptual and Motor Skills, The Sport Psychologist, and the Journal of Applied Sport Psychology.
A total of 715 participants (344 males and 112 females) were included in the 27 studies, with an average of 23.19 participants per study. Only two studies presented a sample size greater than 50 participants (Caliari, 2008; Robin et al., 2007), and the majority had a sample size between 10 and 50 participants (n = 19). The gender distribution revealed eight studies with males, three with females, and eleven with both genders (participants' gender was not mentioned in five studies). The age of the participants ranged between 6 and 63 years old. Most of the studies comprised children between 7 and 13 years old (n = 8) or adolescents and young adults between 14 and 22 years old (n = 8) (the age of the participants was not mentioned in two studies; Caliari, 2008; Robin et al., 2007). Eleven of the 27 studies concerned elite, expert, or international athletes. In contrast, seven studies mentioned beginner, novice, or recreational players. Finally, nine studies contained participants with an intermediate practice level, whereas one study mixed novice, intermediate, and elite levels in the same study.
Mental training techniques
A wide variety of techniques were used in the mental training programs of the studies reviewed, including relaxation, imagery, observation (i.e. video observation of athletes), goal setting, arousal regulation, mental quickness training, self-talk, competitive and pre-competitive routines, perceptual-cognitive training (i.e. training to perceive and understand moving patterns), feedback (i.e. targeted feedback from the coach), and communication. The mental training programs were always directly applied to the athletes. The most used techniques were imagery (41% of the studies), relaxation (15%), goal setting (15%), and competitive and pre-competitive routines (12%).
Design of the intervention
Most of the studies used a pre-test-post-test design with an intervention and measures of the variables before and after the sessions. Three studies used a qualitative approach with case studies (Mathers, 2017; Ramirez et al., 2010; Seang-Leol & Calderon, 2018), whereas the other studies based their protocols on quantitative statistical analyses. The duration of the interventions in the reviewed studies ranged between one short session and three years. Two studies used a short program including only one session, five studies used between two and five sessions, and five studies used between six and ten sessions. In contrast, one study proposed more than 50 training sessions (Mathers, 2017), nine studies mentioned between 11 and 20 sessions, and five studies between 21 and 50 sessions. The timing of the interventions varied widely, ranging between 15 and 90 minutes per session and between one and three sessions per week.
Other mental training programs aimed to improve mental competitive skills (n = 14) such as mental toughness (Mathers, 2017; Morais & Rui Gomes, 2019), self-confidence (Daw & Burton, 1994; Mamassis & Doganis, 2004; O et al., 2014; Seang-Leol & Calderon, 2018), or motivation (Ramirez et al., 2010; Vidic & Burton, 2010). The mental skills were measured using psychometric self-report questionnaires, interviews, or observations of the athletes during training or competition. Results revealed an improvement in mental skills. The improvement of mental skills was sometimes combined (n = 10) and sometimes not combined (n = 4) with a performance measure. Finally, specific studies aimed to improve working memory, perceptual skills, and anticipation skills.
Discussion
To the best of our knowledge, there is no review investigating the studies testing the effects of a mental training program in racket sports. Considering the relevance of the mental aspect of the performance in racket sports, the main objectives of this study were: (a) to collect the studies that incorporate mental training programs used in racket sports, (b) to organize the current knowledge on mental training programs and provide a synthesis of the characteristics of these studies, and (c) to identify gaps in the literature on this topic and propose potential further investigations and practical implications.
General findings
Twenty-seven studies published since 1980 were selected for the present review. From an applied perspective, this limited engagement with mental training programs is regrettable. In contrast, a considerable amount of literature on the mental skills required in competitive racket sports has been developed (e.g. Bastug et al., 2017; Kwon et al., 2010; Sharma, 2015). The lack of studies with mental training programs could be a consequence of the persistently weak interest of several sport stakeholders in mental practice (Connaughton, Wadey, Hanton, & Jones, 2008). Moreover, the small number of intervention studies reviewed here could reflect an image of mental training as a less rigorous process than others such as physical training (Jones, 1995).
Inspection of the level of the participants revealed 11 studies with elite athletes, nine with an intermediate level, and seven with a novice population. This distribution provides evidence that mental training is perceived as an elite pursuit by a majority of sport protagonists (Jones, 1995). This point of view is consistent with the high physical, psychological, and social demands of elite sport and fits with the search for detailed training for elite athletes in racket sports (Doherty et al., 2018). However, every level of sport experience and practice could benefit from the effects of mental training programs. The improvement of performance and mental skills among novice participants provides evidence of the feasibility and value of including low competitive levels in mental training programs (e.g. Dana & Gozalzadeh, 2017; Ducrocq et al., 2017). The gender distribution showed a total of 344 males and 112 females mentioned in the reviewed studies. This result highlights a gender imbalance and a male predominance in the mental training programs, and very few of the studies reviewed have explored the effect of gender on the outcomes of the mental training programs (Caserta et al., 2007; Singer et al., 1994).
The studies reviewed were conducted exclusively on the four major racket sports (tennis, table-tennis, badminton, and squash), with a large majority on tennis (n = 19). Badminton and table-tennis were moderately represented, and squash was weakly represented. This distribution is in line with the respective popularity of the racket sports, considering the media and economic importance of tennis in comparison with the other racket sports (Lees, 2003). No study focused on two or more racket sports simultaneously. Studies comparing two activities could provide knowledge about the similarities and differences across racket sports. Moreover, the lack of investigation of some racket sports could limit the adoption of mental training programs in those activities.
Design of the studies
A majority of the studies used a quantitative approach and adopted the traditional pre-test-post-test paradigm with a control group. The quantitative methods facilitated the statistical analyses and allowed a rigorous examination of the effects of mental training programs (Biddle, Markland, Gilbourne, Chatzisarantis, & Sparkes, 2001). Thus, the bulk of quantitative intervention studies consolidated the scientific legitimacy of the tested mental training programs by providing evidence of their significant effects on performance scores and/or on psychological outcomes (e.g., anxiety scores). In addition, a few case studies were also reviewed (Mathers, 2017; Ramirez et al., 2010; Seang-Leol & Calderon, 2018). These case studies have furthered the knowledge base regarding the mental processes of athletes during training and competitions (Biddle et al., 2001). For instance, Mathers (2017) recently proposed an individualised program in which athletes underwent successive mental interventions over a three-year period.
The mental training programs were heterogeneous, as indicated by the large variety in the number of training sessions and/or the duration of the programs. Indeed, intervention durations ranged from 30 minutes to three years, with a majority of programs comprising between 2 and 20 sessions. Many studies used repeated measures before and after the interventions, but very few adopted a longitudinal approach to assess the ongoing variability of relevant psychological outcomes during the programs. Longitudinal studies continuously tracking the psychological processes involved in mental training could therefore further our knowledge about the effects of such programs over time.
Outcomes
All of the mental training programs reviewed reported positive outcomes. These positive results should encourage coaches, athletes, and sport psychologists to set up mental training programs in racket sports suited to the targeted outcomes. The main objective of the studies was the improvement of the players' performance (Gonzalez-Garcia et al., 2017; Mamassis & Doganis, 2004; Mathers, 2017; Morais & Rui Gomes, 2019; Seang-Leol & Calderon, 2018; Vidic & Burton, 2010) or of the quality of their strokes. Various studies revealed significant improvements in the velocity, accuracy, efficiency, and regularity of serves (Atienza et al., 1998; Coelho et al., 2007; Guillot et al., 2013; Jeon et al., 2014; Noel, 1980), service returns (Coelho et al., 2007; Robin et al., 2007), and backhand and forehand strokes (Caliari, 2008; Daw & Burton, 1994; Guillot et al., 2015; Li-Wei et al., 1992; Tzetzis et al., 2008). Open skills (e.g. service returns, decision making) have been less investigated than closed skills (e.g. the serve), probably because of the difficulty of assessing these factors of performance (Currell & Jeukendrup, 2008). However, open skills represent a crucial aspect of racket sports and could be a potential extension for further mental training programs (Coelho et al., 2007).
The programs focused on mental skills also reported positive outcomes. These programs increased levels of mental toughness (Mathers, 2017; Morais & Rui Gomes, 2019), self-determined motivation (Ramirez et al., 2010; Vidic & Burton, 2010), emotional control (Dohme et al., 2019), and self-confidence (Daw & Burton, 1994; Mamassis & Doganis, 2004; O et al., 2014; Seang-Leol & Calderon, 2018), and decreased the athletes' anxiety and anger scores (Mamassis & Doganis, 2004; Steffgen, 2017). These results suggest the potential benefits of mental training on various key mental skills for racket sports (Jones, 1995; Lees, 2003). Well-being indicators (e.g. pleasant and unpleasant emotions) were less explored, with only a few studies including them. This lesser attention could be explained by the general focus on performance in competitive sport. However, the association between performance and well-being has been highlighted in previous racket sports studies (Martinent et al., 2018) and could be an area of improvement for sport stakeholders.
Moreover, several specificities of racket sports, such as awareness or emotional control, have received little dedicated attention. For example, despite the identification of key mental skills in racket sports (Jones, 1995; Lees, 2003; Mamassis & Doganis, 2004), no study has proposed a training program explicitly focused on the emotional demands of these sports. The development of emotional intelligence seems particularly suitable here: the ability to identify, understand, regulate, and use one's own and others' emotions could represent an essential skill for coping with the emotional demands of racket sports (Laborde et al., 2014).
Techniques
The distribution of techniques indicated a wide variety of mental training methods across the explored studies. It is also noteworthy that a particular technique could be used exclusively or combined with other techniques. Among the techniques detailed in the examined studies, imagery emerged as the most used. The results provided evidence of the positive effect of imagery on sport performance indicators related to the strokes of racket sports. Imagery programs improved the velocity, accuracy, efficiency, and regularity of serves, returns, backhands, and forehands (Atienza et al., 1998; Caliari, 2008; Dana & Gozalzadeh, 2017; Guillot et al., 2015). Consequently, imagery programs appear suitable for the development of racket sports motor skills. Additionally, relaxation techniques were used in five studies, regularly in combination with other techniques such as imagery (Lejeune et al., 1994; Li-Wei et al., 1992; Mamassis & Doganis, 2004). Overall, relaxation techniques also led to an improvement in players' strokes and performance. Goal-setting learning was proposed in five studies, especially within targeted interventions grounded in cognitive behaviour therapies (Daw & Burton, 1994; Mamassis & Doganis, 2004; Mathers, 2017; Seang-Leol & Calderon, 2018; Vidic & Burton, 2010). Goal-setting techniques improved salient mental skills in racket sports such as self-confidence (Daw & Burton, 1994; Mamassis & Doganis, 2004) or self-determined motivation (Vidic & Burton, 2010). Similarly, the studies that included competitive and pre-competitive routines led to an increase in self-confidence scores (Mamassis & Doganis, 2004). In sum, imagery, relaxation, goal setting, and routines seem well suited to the demands of racket sports, facilitating performance and fostering salient mental skills (self-determined motivation, emotional regulation).
Moreover, the results highlighted an association between the technique used during mental training and the mental skills targeted for improvement. From an applied perspective, these results suggest adapting the techniques of mental programs to the specific objectives and issues encountered by the athletes. Several techniques were rarely, or almost never, used in the studies reviewed. For instance, arousal regulation and self-talk techniques have been proposed in a very limited number of studies (Daw & Burton, 1994; Dohme et al., 2019; Mamassis & Doganis, 2004). Furthermore, the techniques used in the reviewed studies have been almost exclusively implemented with the athletes themselves. Very few studies focused on the salient stakeholders within the athletes' environment (e.g., parents, coaches). However, previous studies have highlighted the importance of the athletes' environment in racket sports and have suggested potential techniques to help parents and coaches (Gould et al., 2008; Harwood & Knight, 2009; Kwon et al., 2010; Riemer & Chelladurai, 1998; Sharma, 2015).
Conclusions
The present review aimed to explore the studies that include a mental training program in racket sports. The 27 studies selected comprised various samples from different racket sports and were characterized by distinct study designs, mental training techniques, and outcomes. The various techniques used in the programs led to positive outcomes such as improvements in performance and mental skills. However, the results of the review highlighted the unequal distribution of the populations (e.g. male and expert domination) and of the sports (tennis attracting most of the attention) in the studies. Moreover, the present results revealed several gaps in the targeted outcomes (e.g. lack of focus on well-being indices) and/or in the techniques (e.g. self-talk or relaxation, absence of programs applied to coaches or parents), given the specific constraints of racket sports. In summary, this review suggests potential implications for both researchers and practitioners. The results encourage further investigation of mental training programs to address the aforementioned unexplored issues. Finally, we hope that this review will promote the development of mental training programs in racket sports and will help sport stakeholders (coaches, sport psychologists, athletes) to adapt mental training to practice constraints and objectives.
"year": 2020,
"sha1": "c4a695d043cdb7510ea0bf8f92472011c526fd70",
"oa_license": null,
"oa_url": "https://digibug.ugr.es/bitstream/10481/63721/1/24-Article%20Text-159-1-10-20200803.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "211d6144f46c4310bedf368c9b039d59852b0eb5",
"s2fieldsofstudy": [
"Psychology",
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Structural and dynamical modeling of WINGS clusters. III. The pseudo phase-space density profile
(A&A, arXiv:2212.00600v1 [astro-ph.CO], 1 Dec 2022)
Numerical simulations indicate that cosmological halos display power-law radial profiles of pseudo phase-space density (PPSD), Q=rho/sigma^3, where rho is mass density and sigma velocity dispersion. We test these predictions using the parameters derived from the Markov Chain Monte Carlo (MCMC) analysis performed with the MAMPOSSt code on the observed kinematics of a velocity dispersion based stack (sigmav) of 54 nearby regular clusters of galaxies from the WINGS dataset. In the definition of PPSD, the density is either in total mass rho (Q_rho) or in galaxy number density nu (Q_nu) of three morphological classes of galaxies (ellipticals, lenticulars, and spirals), while the velocity dispersion (obtained by inversion of the Jeans equation) is either the total (Q_rho and Q_nu) or its radial component (Q_r,rho and Q_r,nu). We find that the PPSD profiles are power-law relations for nearly all MCMC parameters. The logarithmic slopes of our observed Q_rho(r) and Q_r,rho(r) for ellipticals and spirals are in excellent agreement with the predictions for particles in simulations, but slightly shallower for S0s. For Q_nu(r) and Q_r,nu(r), only the ellipticals have a PPSD slope matching that of particles in simulations, while the slope for spirals is much shallower, similar to that of subhalos. But for cluster stacks based on richness or gas temperature, the fraction of power-law PPSDs is lower (esp. Q_nu) and the Q_rho slopes are shallower, except for S0s. The observed PPSD profiles, defined using rho rather than nu, appear to be a fundamental property of galaxy clusters. They would be imprinted during an early phase of violent relaxation for dark matter and ellipticals, and later for spirals as they move towards dynamical equilibrium in the cluster gravitational potential, while S0s are either intermediate (richness and temperature-based stacks) or a mixed class (sigmav stack).
Introduction
Cosmological dissipationless simulations have led to the building blocks of the standard model of dark matter, in particular through the establishment of the universality of cosmic structure (halo) density profiles, well characterized by the NFW (Navarro, Frenk, & White 1996) and Einasto models (Navarro et al. 2004). Further insight into the structure of clusters of galaxies has come from the analysis of Taylor & Navarro (2001), who examined the coarse-grained phase-space density profiles of cold dark matter (DM) halos from cosmological simulations. They defined the pseudo phase-space density (PPSD) profile Q(r) ≡ ρ(r)/σ(r)^3, where ρ(r) and σ(r) are the radial profiles of total mass density and velocity dispersion, respectively. They found Q(r) to follow a power law, Q(r) ∝ r^α, with α ≈ −1.875 over two and a half decades in radius. The equivalent PPSD built with the radial component of the velocity dispersion, σ_r, i.e. Q_r(r) ≡ ρ(r)/σ_r(r)^3, is also found to obey a power-law relation with the radial coordinate r, with a slightly steeper slope than Q(r) (Rasia, Tormen, & Moscardini 2004; Dehnen & McLaughlin 2005). These power-law behaviors are remarkable given that the logarithmic density profile log ρ(r) and the logarithmic velocity dispersion profiles log σ(r) (total) and log σ_r(r) (radial) are all convex functions of log r. The slope of Q(r) matches the slope of −15/8 expected from secondary infall models based on the (quasi-)power-law density profile ρ ∼ r^(−9/4) (Gott 1975; Gunn 1977; Bertschinger 1985), despite the fact that DM halos in the cosmological simulations of Taylor & Navarro (2001) assemble in a different way from the regular phase-space stratification process described by Gott, Gunn, and Bertschinger. Much effort has been devoted to understanding why Q(r) is a simple power law, and why its slope is so close to the value predicted by the secondary infall model of Bertschinger (1985).
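The −15/8 slope quoted above follows from a simple power-law scaling argument; as a quick sanity check with exact fractions (an illustrative sketch, not part of the paper):

```python
from fractions import Fraction as F

# For rho ~ r^g one has M(r) ~ r^(3+g) and sigma^2 ~ G M / r ~ r^(2+g),
# so Q = rho / sigma^3 scales as r^(g - 3(2+g)/2).
g = F(-9, 4)                        # the secondary-infall density slope
q_slope = g - F(3, 2) * (2 + g)
print(q_slope)                      # -> -15/8, i.e. -1.875
```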
Analytical and numerical studies have shown that the final shape of Q(r) does not depend on whether the halo evolves through major mergers or spherical infall (Manrique et al. 2003; Ascasibar et al. 2004; Austin et al. 2005; Hoffman et al. 2007). The final Q(r) configuration of cosmological halos appears to be reached very early on, soon or immediately after the early relaxation phase (Vass et al. 2009; Colombi 2021), driven by violent relaxation (Lynden-Bell 1967). Other collective relaxation processes might be important as well in shaping Q(r), such as Landau damping and radial orbit instability (Henriksen 2006; MacMillan, Widrow, & Henriksen 2006). At large radii, where relaxation is still incomplete, deviation from the power-law behavior was expected theoretically (Bertschinger 1985; Lapi & Cavaliere 2011), and also detected in numerical simulations. However, deviations from the power-law behavior never exceed 20% within the halo virial radius (Ascasibar & Gottlöber 2008; Navarro et al. 2010; Ludlow et al. 2010; Marini et al. 2021). Close to the halo center, where baryonic effects can be important, both a steepening (Lapi & Cavaliere 2011) and a flattening (Butsky et al. 2016) of Q(r) have been predicted.
Vass et al. (2009) suggested that the original physical association of Q(r) with the halo coarse-grained phase-space density is not justified, as the two quantities have different behaviors (but see Ma & He 2009). Interpretation of Q(r) in terms of the power −3/2 of the dynamical entropy of a gravitating system might prove more useful to understand its origin. Faltenbacher et al. (2007) noted the similarity between the so-defined dynamical entropy of DM particles and the thermodynamic entropy of the intra-cluster gas, outside the core of simulated clusters. He & Kang (2010) argued that Q(r) corresponds to a minimum-entropy state, the end result of long-range (e.g. violent) relaxation processes in gravitating systems, while the state of maximum entropy is only reached locally, where short-range (e.g. two-body) relaxations dominate.
Almost all numerical investigations so far have focused on the Q(r) traced by DM particles, and similar results for Q(r) have been obtained in DM-only and in hydrodynamical simulations (compare, e.g., Dehnen & McLaughlin 2005; Rasia et al. 2004). Only recently have different tracers been considered in the definition of Q(r), in the study of Marini et al. (2021), where the Q(r) slope was found to depend strongly on the chosen tracer, being steeper for stars and shallower for galaxies (subhalos in hydrodynamical simulations) than for DM particles (α = −3, −1.3, and −1.8, respectively). This dependence is very relevant when comparing simulation-based predictions with observations, since Q(r) is not an observable; ρ(r) can be inferred from stellar and galaxy kinematics, from gravitational lensing, or from the temperature and pressure of the intra-cluster gas (see, e.g., Pratt et al. 2019, for a review), but σ(r) can only be determined for the tracer of the gravitational potential. For consistency, some authors have therefore used the number density profile of the tracers, ν(r), rather than the total mass density profile ρ(r), in the definition of Q(r).
Several observational studies have confirmed the simulation-based power-law behavior of Q(r), both for galaxies and for clusters of galaxies. Chae (2014) found that Q(r) is a power law for Coma cluster elliptical galaxies, with a slope of 1.93 ± 0.06. On larger scales, Q(r) has been measured in several clusters of galaxies over the redshift range 0.06-1.34, and always found to be similar to, or at least consistent with, the simulation-based expectations, both when Q(r) was defined using the total mass density profile ρ(r) (Munari, Biviano, & Mamon 2014; Biviano et al. 2016, 2021), and when the tracer ν(r) was used instead, for tracers of different colors and luminosities (Munari, Biviano, & Mamon 2015; Aguerri et al. 2017; Capasso et al. 2019).
Despite the good agreement of the simulation-based predictions with observations, the power-law behavior, and even the physical reality, of Q(r) have been questioned. Nadler, Oh, & Ji (2017) argued against a power-law behavior of Q(r) at any radial scale, and argued that the agreement between the Q(r) found in numerical simulations and the solution of the secondary infall model of Bertschinger (1985) is purely coincidental. According to Schmidt et al. (2008), different halos follow ρ/σ_r^ε ∝ r^α relations with different best-fit values of ε and α; that is, ε = 3 is not a universal value. Arora & Williams (2020) argued that the power-law behavior of Q(r) does not have a physical origin, but is just a fluke, a consequence of the linear relation between the logarithmic slope of the mass density profile, γ ≡ d ln ρ/d ln r, and the velocity anisotropy profile β = 1 − σ_θ^2/σ_r^2, where σ_θ and σ_r are the tangential and radial components of the velocity dispersion tensor. The linear β − γ relation was discovered by Hansen & Moore (2006) and Hansen & Stadel (2006) in a variety of simulated gravitating systems, issued from controlled simulations of halo-halo collisions and radial infall, as well as from cosmological simulations. However, the physical origin of the linear β − γ relation is not better elucidated than that of the Q(r) power law, and the relation is not clearly established in real clusters (Munari et al. 2014; Aguerri et al. 2017; Biviano et al. 2021).
Lacking a clear understanding of the physical origin(s) of either Q(r) ∝ r^α or the linear β − γ relation, several studies have tried to investigate their consistency in the context of the dynamical equilibrium of a spherical gravitating system, as described by the Jeans equation, which for a spherical stationary system is (Binney 1980)

d(ν σ_r^2)/dr + 2 (β/r) ν σ_r^2 = −ν G M(r)/r^2 .    (1)

By assuming a linear β − γ relation, Dehnen & McLaughlin (2005) found a critical solution that satisfies ρ/σ_r^ε ∝ r^α, with the value of α being dependent only on ε and β_0, and independent of the slope of the β − γ relation. In particular, ε = 3 and β_0 = 0 lead to α = 1.94, essentially the same value found in numerical simulations. Barnes et al. (2007) considered density profiles of the NFW or Einasto form, but could not find consistent solutions of the Jeans equation with a power-law Q(r) and a linear β − γ relation similar to the relations found in simulations. They suggested that the β − γ relation for any single halo is not strictly linear, and that the β − γ relation is not just a manifestation of a scale-free Q(r). Zait, Hoffman, & Shlosman (2008) started from the power-law behavior of Q(r) to show that a linear β − γ relation is inconsistent with generalized NFW density profiles (Zhao 1996), but consistent with Einasto profiles of index n = 6 (see, e.g., eq. [16] of Mamon et al. 2019, Paper II hereafter). The behavior of Q(r) should depend on the choice of tracer used to measure σ(r) and σ_r(r), and possibly on the density profile, when the number density profile ν(r) is used in the definition of Q(r). But the influence of tracer choice on Q(r) has not yet been addressed.
In this article, we investigate Q(r) and Q_r(r) in 54 nearby clusters of galaxies (0.04 < z < 0.07) from the WINGS data set (Fasano et al. 2006), which Cava et al. (2017, hereafter Paper I) found to be regular systems. In a forthcoming article (Mamon & Biviano, in prep.), we will investigate the β − γ relation in a similar fashion. We exploit the results of the kinematic analysis of Paper II, which determined the mass density and velocity anisotropy profiles, ρ(r) and β(r), of stacked samples of these clusters, as well as the number density profiles for each of three morphological classes of galaxies, with Gaussian priors obtained from previous fits in Paper I of a model plus constant background to the photometric data for the same stacked clusters. For the first time, we determine Q(r) and Q_r(r) separately for three different morphological classes of cluster galaxies.
In the rest of this paper we use Q and Q r to refer generically to the pseudo-phase-space density profiles without distinction to whether they have been derived using ρ(r) or ν(r). When needed, we will use subscripts to distinguish the different profiles, Q ρ , Q r,ρ and Q ν , Q r,ν .
The structure of this paper is the following. In Sect. 2 we describe our data set, in Sect. 3 our method of analysis. In Sect. 4 we present our results. We discuss our results in Sect. 5. Sect. 6 provides a summary and our conclusions.
The data set
Our analysis is based on the WINGS data set, which contains X-ray-selected clusters at 0.04 < z < 0.07 (Fasano et al. 2006) with spectroscopic coverage for cluster galaxies with a median stellar mass of log(M_*/M_⊙) = 10.0 for ellipticals (E) and 10.4 for spirals (S) (Cava et al. 2009; Moretti et al. 2014; Paper II). Morphological types were derived by Fasano et al. (2006) using the MORPHOT tool. In Paper I, we defined three intervals in the MORPHOT classification parameter corresponding to the three morphological classes of ellipticals, lenticulars (S0), and spirals.
In Paper I, we identified cluster members using the Clean algorithm (Mamon, Biviano, & Boué 2013) and selected a subsample of 68 clusters with at least 30 spectroscopic members. Using the substructure test of Dressler & Shectman (1988), we identified 54 regular and 14 irregular clusters. We then estimated r_200 and v_200 of these 68 clusters in three different ways, based on: (i) the cluster velocity dispersion (sigv), (ii) an estimate of the cluster richness (num, Mamon et al. in prep., see Old et al. 2014), and (iii) the cluster X-ray temperature (tempX, only available for 38 of these clusters). Using these three estimates of r_200 and v_200 from Paper I, we then stacked the 54 (38, in the case of the tempX scaling) regular clusters into three pseudo-clusters, by rescaling the projected radii and rest-frame velocities of cluster galaxies by their cluster's r_200 and v_200, respectively. These three pseudo-clusters formed the data set for the kinematic modeling that we performed in Paper II, using the MAMPOSSt algorithm of Mamon, Biviano, & Boué (2013). Irregular clusters were not considered because MAMPOSSt is based on the Jeans equation, which, being derived from the collisionless Boltzmann equation, assumes that the tracers are test particles orbiting the gravitational potential and not interacting with one another, in contrast to galaxies within a substructure of an irregular cluster.
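The stacking procedure (rescaling projected radii by r_200 and rest-frame velocities by v_200) can be sketched as follows; the data layout and function names are our own illustrative assumptions, not the WINGS pipeline:

```python
def stack_clusters(clusters):
    """Stack clusters into one pseudo-cluster by rescaling each galaxy's
    projected radius R by its cluster's r200, and its rest-frame velocity
    v by the cluster's v200 (a sketch of the sigv-type scaling).

    `clusters` is a list of dicts with keys 'r200', 'v200', 'galaxies';
    each galaxy is an (R, v) pair.  Names are illustrative only.
    """
    stacked = []
    for cl in clusters:
        for R, v in cl['galaxies']:
            stacked.append((R / cl['r200'], v / cl['v200']))
    return stacked

# toy example: two clusters with different physical scales map their
# galaxies onto the same normalized phase-space coordinates
clusters = [
    {'r200': 2.0, 'v200': 1000.0, 'galaxies': [(1.0, 500.0)]},
    {'r200': 1.0, 'v200': 500.0,  'galaxies': [(0.5, 250.0)]},
]
print(stack_clusters(clusters))  # both galaxies land at (0.5, 0.5)
```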
MAMPOSSt fits parametric models to the distributions of galaxies in projected phase space (projected distance to the center and line of sight velocity). The parameters are those describing the total mass density profile, ρ(r), the tracer density profiles for the three morphological types (i), ν i (r), and the velocity anisotropy profiles for the three types, β i (r). MAMPOSSt speeds up the calculations by a large factor by assuming that the three spherical-coordinate components of the local velocity distribution function are Gaussian. The recovered radial profiles of mass density and velocity anisotropy are as good with MAMPOSSt as with other methods (Read et al. 2021), even though MAMPOSSt is much faster.
Here we use the results of the kinematic modeling of Paper II. More specifically, we consider the M(r) and β(r) model parameters of the outputs (chain elements) of the Markov Chain Monte Carlo (MCMC) investigation of parameter space used by MAMPOSSt, using CosmoMC (Lewis et al. 2002), based on the Metropolis-Hastings algorithm. This allows us to determine Q(r) and Q r (r) at several values of r, and for the three different morphological classes, as detailed in Sect. 3. For Q ν and Q r,ν , we also used the tracer number density profiles, ν i (r), for each morphological class, obtained from fitting NFW models plus constant background to the photometric data and then refined in the MCMC analysis with MAMPOSSt.
For each model, MAMPOSSt was run using 6 parallel MCMC chains, each with over 10^5 elements, for a total of over 500 000 chain elements (i.e. points in parameter space) per model. We discard the first 20% of the elements of each chain of each model, corresponding to the 'burn-in' phase in which the MCMC has not yet settled to its equilibrium, and estimate Q(r) and Q_r(r) for all remaining chain elements.
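The burn-in removal described here is a simple truncation of each chain; a minimal sketch (the 20% fraction is the one quoted in the text, the function name is ours):

```python
def discard_burn_in(chain, frac=0.2):
    """Drop the first `frac` of a single MCMC chain (the burn-in phase);
    `chain` is a list of chain elements (parameter vectors)."""
    n_burn = int(frac * len(chain))
    return chain[n_burn:]

chain = list(range(10))           # 10 toy chain elements
kept = discard_burn_in(chain)     # drops elements 0 and 1
print(kept)                       # -> [2, 3, 4, 5, 6, 7, 8, 9]
```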
We consider all three stacks obtained using the three scalings, sigv, num, tempX. We present the results for the sigv scaling in the main text and discuss them in Sect. 5.1. Results for the num and tempX scalings are presented in Appendix A and discussed in Sect. 5.2 in comparison with the results obtained for the sigv scaling.
Analysis
The parameter values in each MCMC chain element are used to directly derive the radial profiles ν(r), ρ(r), and β(r). To determine Q(r) and Q_r(r), we also derive σ_r(r) and σ(r) via the Jeans-equation inversion (van der Marel 1994; Mamon & Łokas 2005)

σ_r^2(r) = [1/ν(r)] ∫_r^∞ exp[2 ∫_r^s β(t) dt/t] ν(s) G M(s)/s^2 ds ,    (2)

and

σ^2(r) = [3 − 2 β(r)] σ_r^2(r) ,    (3)

where M(r) is the total mass profile. Paper II considered 30 sets of priors according to the chosen models for the mass density profile, with mass density logarithmic slope

γ(x) ≡ d ln ρ/d ln x = (γ_0 + γ_∞ x)/(1 + x) ,    (4)

where x = r/r_s, while γ_0 and γ_∞ are the logarithmic slopes of the density profile at r = 0 and at infinity, respectively. The models considered in Paper II nearly all assumed γ_∞ = −3, with γ_0 = −1 for NFW and γ_0 free for generalized NFW ('gNFW'), and scale radius r_s related to the radius where γ = −2 by r_−2 = (2 + γ_0) r_s. We had also adopted Einasto (1965) mass models, which fit even better the distribution of radii in halos in ΛCDM dissipationless cosmological simulations (Navarro et al. 2004), yielding

γ(r) = −2 (r/r_−2)^(1/n) .    (5)

The mass density models are normalized by the mass at radius r_200 = c_200 r_−2, where the mean mass density is equal to 200 times the critical density of the Universe at z = 0.055, the median redshift of the WINGS clusters.
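The Jeans inversion for σ_r(r) cited above (van der Marel 1994; Mamon & Łokas 2005) can be sketched numerically. The following is an illustrative pure-Python implementation, not the paper's code; the toy singular-isothermal-sphere check has the known constant solution σ_r^2 = 1/2 (in units where G times the mass normalization is 1):

```python
import math

def sigma_r2(r, nu, GM, beta, s_max=1e4, n=20000):
    """sigma_r^2(r) = (1/nu(r)) Int_r^inf exp[2 Int_r^s beta(t) dt/t]
    * nu(s) GM(s)/s^2 ds, evaluated with a trapezoidal rule in ln s.
    nu, GM, beta are callables: tracer density, G*M(r), anisotropy."""
    lo, hi = math.log(r), math.log(s_max)
    h = (hi - lo) / n
    bint = 0.0                          # running Int beta(t) d ln t
    prev = nu(r) * GM(r) / r            # integrand * s, at s = r
    total = 0.0
    for i in range(1, n + 1):
        s = math.exp(lo + i * h)
        bint += beta(s) * h             # rectangle rule; fine for a sketch
        f = math.exp(2.0 * bint) * nu(s) * GM(s) / s
        total += 0.5 * (prev + f) * h
        prev = f
    return total / nu(r)

def sigma2_total(r, nu, GM, beta, **kw):
    """Total dispersion squared, sigma^2 = (3 - 2 beta) sigma_r^2."""
    return (3.0 - 2.0 * beta(r)) * sigma_r2(r, nu, GM, beta, **kw)

# toy check: singular isothermal sphere (nu ~ r^-2, G M(r) = r),
# isotropic orbits -> constant sigma_r^2 = 1/2
iso = lambda r: 0.0
print(round(sigma_r2(1.0, lambda s: s**-2.0, lambda s: s, iso), 3))  # -> 0.5
```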
The anisotropy models considered in Paper II followed

β(r) = β_0 + (β_∞ − β_0) r^δ/(r^δ + r_β^δ) ,    (6)

where β_0 and β_∞ are the values of β at r = 0 and at infinity, respectively, r_β is the anisotropy radius where β is midway between β_0 and β_∞, and δ is the anisotropy sharpness, with δ = 1 for Tiret et al. (2007) anisotropy and δ = 2 for the generalized Osipkov-Merritt ('gOM') anisotropy (Osipkov 1979; Merritt 1985). For δ = 1 or 2, the exponential term in Eq. (2) is (Appendix B of Mamon et al. 2013 for these anisotropy models and a few others)

exp[2 ∫_r^s β(t) dt/t] = (s/r)^(2β_0) [(s^δ + r_β^δ)/(r^δ + r_β^δ)]^(2(β_∞−β_0)/δ) .    (7)

The anisotropy radius was either a free parameter or fixed to the scale radius of the given morphology, r_ν, previously fitted to the photometric data in Paper I, using a projected NFW model plus a constant field surface density. In Paper II, we found that the elliptical galaxy distribution traces the mass very well, the S0 distribution traces it reasonably well, while the spiral galaxy distribution traces it very poorly. In other words, r_ν,E ≈ r_−2, while r_ν,S ≈ 4 r_−2.
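The anisotropy model described here, β(r) = β_0 + (β_∞ − β_0) r^δ/(r^δ + r_β^δ), is easy to encode; a minimal sketch (parameter names are illustrative), which also checks the midway property at r = r_β:

```python
def beta_profile(r, beta0, beta_inf, r_beta, delta):
    """Generalized anisotropy profile of Paper II:
    beta(r) = beta0 + (beta_inf - beta0) * r^delta / (r^delta + r_beta^delta),
    with delta = 1 (Tiret) or delta = 2 (generalized Osipkov-Merritt)."""
    x = (r / r_beta) ** delta
    return beta0 + (beta_inf - beta0) * x / (1.0 + x)

# at r = r_beta, beta is midway between beta0 and beta_inf, for any delta
for delta in (1, 2):
    print(beta_profile(1.0, 0.0, 0.6, 1.0, delta))  # -> 0.3 for both
```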
In the present paper, among the 30 models of Paper II, we only considered single-component mass models with free inner and outer anisotropies for all three morphological types. We also excluded the models with Tiret anisotropy and anisotropy radius fixed to r_β = r_ν, which lead to linear β − γ relations if the tracer follows the mass (Mamon & Biviano, in prep.). This left us with models 6, 7, 12, and 15 in Table 2 of Paper II. Our results are therefore independent of the linear β − γ relation assumption that, according to some authors, could explain the power-law behavior of Q(r) and Q_r(r) (Dehnen & McLaughlin 2005; Arora & Williams 2020).
All four considered Paper II models assume an NFW ν(r), with a scale radius r_ν as a free parameter, one r_ν parameter for each morphological class. In models 6 and 7, ρ(r) is modelled by the gNFW profile, with γ_∞ = −3 and γ_0 as a free parameter (Eq. [4]). In models 12 and 15, ρ(r) is instead modelled by the NFW profile. In all four models r_200, and therefore M_200, is a free parameter of ρ(r), while c_200 is related to M_200 through the concentration-mass relation of Dutton & Macciò (2014), with a Gaussian prior σ(log c_200) = 0.1 (logarithms are in base 10). Therefore the mass density profile involves 2 (NFW and n = 6 Einasto) or 3 (gNFW) free parameters. Models 7 and 12 adopt the Tiret model for β(r), while models 6 and 15 adopt the gOM anisotropy model. Both the Tiret and the gOM models are characterized by two free parameters per morphological class, the inner and outer velocity anisotropies β_0 and β_∞. The anisotropy scale radius r_β is a free parameter in Tiret models 7 and 12, whereas it is tied to the tracer scale radius, r_β = r_ν, in gOM models 6 and 15. Thus, the anisotropy profile involves 2 (fixed r_β) to 3 (free r_β) parameters per morphological type, hence 6 (fixed r_β) to 9 (free r_β) free parameters after summing over the three morphological types.
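The gNFW logarithmic slope used by these models, γ(x) = (γ_0 + γ_∞ x)/(1 + x) with γ_∞ = −3, can be checked quickly; in particular, the radius where γ = −2 is r_−2 = (2 + γ_0) r_s, reducing to the familiar r_−2 = r_s for NFW (γ_0 = −1). A small sketch (function names are ours):

```python
def gnfw_log_slope(x, gamma0, gamma_inf=-3.0):
    """Logarithmic density slope of the gNFW model,
    gamma(x) = (gamma0 + gamma_inf * x) / (1 + x), with x = r / r_s."""
    return (gamma0 + gamma_inf * x) / (1.0 + x)

# gamma = -2 is reached at x = r_-2 / r_s = 2 + gamma0
for gamma0 in (-1.0, -0.5, 0.0):
    x_m2 = 2.0 + gamma0
    print(gnfw_log_slope(x_m2, gamma0))   # -> -2.0 each time
```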
In addition to the four models described above, we consider the following three models. Model 7c is the same as model 7 but with c_200 as a fully free parameter (with a uniform prior on log c_200 from 0 to 1). Models 12e and 15e are the same as models 12 and 15, respectively, but with an n = 6 Einasto ρ(r) in lieu of NFW. The properties of these seven models are summarized in Table 1.
For each MCMC chain element, we determine Q(r) and Q_r(r) at six logarithmically spaced radii, from r/r_−2 = 0.125 to 4, in steps of a factor 2: r/r_−2 = 2^(i−4), i = 1, ..., 6, that is, from roughly 0.03 to 1 virial radius. We fit straight lines to the six values of log Q vs. log r, and of log Q_r vs. log r, for each individual MCMC chain element, yielding log Q(r) = a + b log(r/r_−2). We measure the linearity of Q(r) and Q_r(r) using the quantity l = 1 − D/L, where L = (1 + b^2)^(1/2) (x_6 − x_1) is the length of the fitted line, with x_i = log(r_i/r_−2), and D is the mean orthogonal deviation of the six measurements from the fitted line, with y_i = log[Q(r_i)/Q(r_−2)]. We arbitrarily set a limit l = 0.9 above which the relation is considered to be linear, that is, the points deviate on average from the fitted line by less than 10% of the line length. We show examples of linear and non-linear relations in Fig. 1.
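The linearity statistic l = 1 − D/L can be sketched as follows. This is an illustrative implementation in which D is taken as the mean orthogonal deviation of the points from the least-squares line and L as the line length over the sampled range; the exact normalization in the paper may differ:

```python
import math

def linearity(xs, ys):
    """Linearity statistic l = 1 - D/L for (log r, log Q) points:
    least-squares fit y = a + b x, L = line length over the sampled range,
    D = mean orthogonal deviation of the points from the fitted line."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    L = math.hypot(1.0, b) * (max(xs) - min(xs))
    D = sum(abs(y - a - b * x) for x, y in zip(xs, ys)) / n / math.hypot(1.0, b)
    return 1.0 - D / L

# a pure power law Q ~ r^-1.875 sampled at the six radii of the text
xs = [math.log10(2.0 ** (i - 4)) for i in range(1, 7)]  # r/r_-2 from 1/8 to 4
ys = [-1.875 * x for x in xs]
print(round(linearity(xs, ys), 3))  # -> 1.0 (a perfect power law)
```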
Linearity of log Q vs. log r
We first consider whether the Q(r) and Q r (r) profiles are linear in logarithmic space. Fig. 2 shows the fraction f l of MCMC chain elements that have l ≥ 0.9 (see Sect. 3) with the f l values listed in Table 2. Independently of the chosen model and galaxy type, all profiles are linear for over 95% of the MCMC chains for Q ρ and Notes. The model number is the same as in table 2 of Paper II. Letters following the model numbers indicate slight modifications to the models; we use 'c' to indicate that the halo concentration c 200 is a fully free parameter, and 'e' that the Einasto total density model ρ(r) model is adopted. The rows are (1) (9), the inner (10) and outer (11) velocity anisotropies, and the anisotropy radius (12), where r β = r ν means that we fixed the anisotropy radius to the tracer scale radius. Table 1) for the sigv scaling, using total mass density profile (left) and tracer number density profiles (right), with total velocity dispersion (top) and radial velocity dispersion (bottom). Error bars are smaller than the symbol sizes.
Q r,ρ . The f l values of the Q ρ and Q r,ρ profiles are almost identical. There is no clear dependence of f l on either the ρ(r) or the β(r) model chosen. Recall that we did not consider the models of Paper II that lead to linear β − γ relations to avoid biasing the linearity of the PPSD, since the PPSD and β − γ relations may be physically related.
The Q ν and Q r,ν profiles are also linear for over 90% of the MCMC chain elements, independently of the chosen model, but only when either ellipticals or spirals are considered. When considering S0, the f l values for the Q ν and Q r,ν profiles can be as low as 0.80. Models with gNFW ρ(r) have lower values of f l when considering S0. The f l values of the Q ν and Q r,ν profiles are very similar.
Combining all three morphological classes, the linear fractions for Q ρ and Q r,ρ are maximal for model 15e (n = 6 Einasto with gOM anisotropy). Similarly, for Q ν and Q r,ν , the linear fractions are maximal for model 12 (NFW with Tiret anisotropy).
Slopes
We then fit straight lines to log Q and log Q r vs. log(r/r −2 ) for the MCMC chain elements with l ≥ 0.9 (non-linear profiles are not considered, as the straight-line slope is not a useful statistic for them). We show the distributions of the best-fit logarithmic slopes of Q(r) in Fig. 3 (left panel: Q ρ , right panel: Q ν ) and of Q r (r) in Fig. 4. The slope distributions do not differ in a significant way from one model to another and have similar unimodal shapes for all profiles.
The mean slopes of the distributions shown in Figs. 3 and 4 are compared with the simulation-based values in Fig. 5. We also provide the average and dispersion of the logarithmic slopes of the linear Q and Q r profiles for all models and all galaxy types in Table 2. Our results do not depend in a significant way on the assumed model for ρ(r) and β(r). In fact, the average logarithmic slopes of the Q and Q r profiles for a given galaxy type are very similar across different models, and the dispersion of the average slope values of the seven models is much smaller than the dispersion in the values of the slopes obtained from the MCMC of any individual model (see rows labelled 'mean' in Table 2).
Both for ellipticals and spirals, and also marginally for S0s, the logarithmic slopes of the linear Q ρ and Q r,ρ profiles are consistent with the simulation-based prediction for DM tracers for all models (Fig. 5). The Q ν and Q r,ν profile slopes for ellipticals are consistent with those of simulations based on DM tracers, while the corresponding slopes for spirals are not. The Q ν slopes for S0s are also inconsistent with the simulation-based predictions based on DM tracers for all models, while the Q r,ν profile slopes for S0s are marginally consistent with the same simulation predictions (thanks to larger dispersions). Interestingly, the spiral Q ν profile slopes are in agreement with the simulationbased prediction for subhalos, while the elliptical and S0 Q ν profiles are not.
Discussion
We discuss in turn our results on sigv stacks and on the other two stacks (num and tempX).
Discussion of results on sigv stacked clusters
Most Q ρ and Q r,ρ profiles are very close to power-law relations: 96% of all models and galaxy types show PPSDs with linearity l > 0.9 (see Fig. 2, left panels, and Table 2). The large majority of MCMC chain elements predict power-law Q ρ (r) and Q r,ρ (r) with average slopes in very good agreement and fully consistent with the simulation-based expectations using DM particles as tracers, but slightly flatter for S0s than for ellipticals and spirals (see Fig. 5, top-left panel). Our results support the findings of several studies based on both DM-only and hydrodynamical simulations (Taylor & Navarro 2001; Rasia et al. 2004; Dehnen & McLaughlin 2005; Ludlow et al. 2010; Navarro et al. 2010), and of previous observational studies (Munari et al. 2014; Biviano et al. 2016, 2021), and do not support claims against the power-law behavior of Q(r) (Nadler et al. 2017). Since our results are based on a stack cluster, we can neither confirm nor reject the numerical result of Schmidt et al. (2008) against the universality of Q(r) across different cosmological halos. However, for none of the three galaxy classes do the Q ρ (r) slopes agree with those obtained for subhalos in numerical simulations (Marini et al. 2021).
If both Q(r) and Q r (r) are power laws, of respective slopes α and α + ∆α, then their ratio R = Q r /Q = (3 − 2β)^(3/2) should also be a power law, of slope ∆α. For the Tiret and gOM anisotropy models (Eq. [8]), β(y) = β 0 + (β ∞ − β 0 ) y^δ/(1 + y^δ), one then expects R(y) = [3 − 2β 0 − 2 (β ∞ − β 0 ) y^δ/(1 + y^δ)]^(3/2), where y = r/r β (Eq. [12]). Eq. (12) indicates that R varies from one constant value, R 0 = (3 − 2β 0 )^(3/2), at small radii, to another constant value, R ∞ = (3 − 2β ∞ )^(3/2), at large radii. Therefore, Q r /Q cannot be a power law over the full range of radii (unless β ∞ = β 0 ). If one restricts the analysis to a narrow range of radii around r = r β , one expects a quasi-linear behavior, obtained by a series expansion of R(y) in Eq. (12).
Fig. 3. Distributions of the best-fit logarithmic slopes of Q(r) for the different models (Table 1), for sigv scaling. Left panel: Q ρ ; right panel: Q ν . Grey (respectively yellow) shadings indicate the simulation-based prediction for the slope of DM tracers (respectively subhalos), −1.84 ± 0.025 (respectively −1.29 ± 0.03).
Fig. 5. Average and dispersion of the Q profile logarithmic slopes and the simulation-based predictions for DM tracers (grey shading), −1.84 ± 0.025 for Q(r) and −1.92 ± 0.05 for Q r (r), and for subhalos (yellow shading), −1.29 ± 0.03, for different morphological classes (indicated on the x axis), in different models (color coded as in Fig. 2 and Table 1). Only linear (l ≥ 0.9) profiles are considered.
Fig. 6. Illustration of the non-linearity of Q r /Q in the Tiret and gOM anisotropy models (Eq. [12] with δ = 1 and 2, respectively). Our analysis was limited to the radii in the shaded region.
The zeroth order term is positive since β < 1 by definition. The first order term is proportional to δ, and is negative for β ∞ > β 0 but positive otherwise. Hence, the transition of Q r /Q from R 0 at small radii to R ∞ at large radii is smoother for low δ anisotropy profiles. This is illustrated in Figure 6, which shows that the gOM model (δ = 2) is less linear than the Tiret (δ = 1) model. In turn, this would indicate that the fraction of linear models should be higher with Tiret anisotropy than for similar mass models with gOM anisotropy. However, in practice, the necessary non-linearity of Q r /Q is not a worry, because the non-linearity range of Q r /Q is smaller than the non-linearity range of either Q(r) or Q r (r), since the logarithmic slopes of Q(r) and Q r (r) are similar (Table 2). For example, if Q(r) were perfectly linear (l = 1), then Q r would have a linearity l = 0.997 and 0.992 for the Tiret and gOM anisotropy models, respectively, hence much greater than our threshold of 0.9 for linear models.
It might at first appear surprising that Q ρ and Q r,ρ should have similar slopes for the three morphological classes, given that the three classes have different line-of-sight velocity dispersion profiles (Paper I) and different β(r) (Paper II). The similarity of Q ρ and Q r,ρ for the three classes then implies that they also have similar σ(r) and σ r (r), and that the observed differences in their line-of-sight velocity dispersion profiles (Paper I) and β(r) (Paper II) are compensated by their different ν(r) (see Eqs. [2], [3], and Paper I).
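The Tiret vs. gOM comparison of Fig. 6 can be reproduced numerically. A sketch, assuming β(y) = β 0 + (β ∞ − β 0 ) y^δ/(1 + y^δ) with y = r/r β ; the radial grid and the illustrative values β 0 = 0, β ∞ = 0.5 are my choices:

```python
import numpy as np

def log_R(y, beta0, betainf, delta):
    # R = Q_r/Q = (3 - 2*beta)^(3/2), with a Tiret (delta=1) or gOM (delta=2) beta(y)
    beta = beta0 + (betainf - beta0) * y**delta / (1.0 + y**delta)
    return 1.5 * np.log10(3.0 - 2.0 * beta)

y = np.logspace(-1, 1, 50)        # radii around r_beta
u = np.log10(y)

def rms_line_residual(delta):
    # rms deviation of log R from its best-fit straight line in log y
    lr = log_R(y, 0.0, 0.5, delta)
    coef = np.polyfit(u, lr, 1)
    return float(np.sqrt(np.mean((lr - np.polyval(coef, u)) ** 2)))

dev_tiret = rms_line_residual(1)  # delta = 1: smooth transition
dev_gom = rms_line_residual(2)    # delta = 2: sharper transition, less linear
```

The sharper (δ = 2) transition gives larger residuals about the fitted line, in line with the text's statement that gOM is less linear than Tiret.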
One expects larger differences between the Q ν (r) profiles of different morphological classes, because Q ν is proportional to the number density of that class, and the number concentrations of the best-fit NFW models of each class differ significantly (Paper I). Indeed, the PPSDs of Q ν and Q r,ν are increasingly shallower when moving from ellipticals to S0s to spirals (bottom panels of Fig. 5), even if these profiles are also quite close to power-law relations, with f l ≳ 0.8 for all models and all galaxy types (see Fig. 2, right panels, and Table 2). At variance with Q ρ and Q r,ρ , only for ellipticals is there a good agreement between the observed Q ν and Q r,ν slopes and the values expected from simulations using DM particles as tracers (bottom panels of Fig. 5). This is not surprising, given that ν(r) ≈ ρ(r) for ellipticals, but not for the other two types (Paper II).
Interestingly, the logarithmic slope of Q ν (r) for spirals is very similar to the one found by Marini et al. (2021) for subhalos in cluster-size halos in cosmological hydrodynamical simulations (see bottom-left panel of Fig. 5). This similarity is probably related to the more extended radial distributions of spirals on one hand (Paper I) and of subhalos on the other (Marini et al. 2021). Note that subhalos in dark matter-only cosmological simulations of the same resolution show instead that the power-law dynamical entropy flattens inside half a virial radius.
The more extended subhalo number density profile, if not due to numerical effects (van den Bosch & Ogiya 2018), can be explained in several ways. Strong cluster tides at pericenter remove mass from infalling subhalos (Merritt 1983), as seen in simulations (e.g. Hayashi et al. 2003; Saro et al. 2006; Springel et al. 2008). Note again that the steeper dynamical entropy (hence steeper Q ν ) for the subhalos in hydrodynamical simulations relative to those in dark matter-only ones suggests that the dissipative nature of gas leads to more concentrated subhalos that are more resilient to cluster tides. Such tides will remove mass from those subhalos that traverse the inner regions of clusters, causing (some of) the galaxies associated with them to fall below the data luminosity threshold. But tides affect all classes of galaxies, not just spirals. Alternatively, ram pressure stripping of the gas of spiral galaxies will strangle their subsequent star formation, leading to lower luminosities than gas-poor galaxies with the same orbits (Gunn & Gott 1972; Boselli et al. 2016). Another explanation may lie in temporal segregation instead of spatial segregation. If spiral galaxies are rapidly transformed into S0s and progressively into ellipticals (as argued, e.g. in Paper II), then S0s and ellipticals are the end products of galaxies that entered the cluster earlier, most probably from lower apocenters. Thus the radial distribution of spirals is much more extended than those of S0s and ellipticals, leading to the shallower Q ν slope of spirals. However, spirals are not expected to be the dominant morphological class in simulated cluster subhalos. Therefore, the good agreement between the Q ν slopes of observed spirals and simulated subhalos remains an open question.
Our results for the Q ν and Q r,ν profiles agree with those obtained from analysis of observations of Capasso et al. (2019), who determined Q ν (r) for passive galaxies only, but not with Munari et al. (2015) and Aguerri et al. (2017), who found Q ν (r) to be consistent with the simulation-based expectations by Dehnen & McLaughlin (2005), for all classes of galaxies in two nearby clusters. Perhaps, thanks to our large data set, we are able to detect significant differences that were not visible in individual cluster analyses because of limited statistics.
When comparing Q ρ , Q r,ρ versus Q ν , Q r,ν , we should take into account that we forced the NFW model for ν(r), but allowed three different models for ρ(r) (see Table 1). However, our results are very insensitive to the choice of the ρ(r) model, and models 12 and 15, that use NFW for ρ(r), behave very similarly to all the others. Our analysis then suggests that the ρ-based definition of Q and Q r is more fundamental than that based on ν, even if, observationally, Q ρ and Q r,ρ are derived using inhomogeneous quantities, as ρ(r) refers to the distribution of total matter, dominated by DM, and σ, σ r to the velocity dispersion of galaxies.
To interpret our results, we note that recent numerical simulations (Colombi 2021) show that the power-law Q ρ and Q r,ρ profiles are established very early on for cluster DM, during the violent relaxation phase, possibly because of a tendency of the system towards a state of minimal entropy (He & Kang 2010). As galaxies enter the cluster gravitational potential well, their orbits and spatial distributions may evolve to reach the same state of dynamical entropy (∝ Q −2.3 ), leading to the same Q ρ and Q r,ρ as that of DM. Since the spatial distribution of ellipticals is similar to that of the total matter, we argue that the bulk motions of ellipticals experienced the same process of violent relaxation as the total matter, that is their progenitors (perhaps with different morphologies) were present at the time of cluster formation.
Violent relaxation at cluster formation cannot be the process shaping the PPSD profiles of S0s and spirals. Spirals have probably entered the cluster within the last ∼ 2 to 3 Gyr, after which they are morphologically transformed to S0s and/or ellipticals (e.g. Larson et al. 1980;Couch et al. 1998; see also Paper II), and quenched by the cluster environment (e.g. Poggianti et al. 2004;Haines et al. 2013), with indications that morphological transformation precedes star formation quenching (Sampaio et al. 2022). There is also observational evidence that S0s are not a pristine cluster population (Postman et al. 2005;Smith et al. 2005;Desai et al. 2007). The deviation of the S0s and spirals Q ν and Q r,ν profiles from simulation-based expectations for DM particles is probably an indication that their PPSD is achieved in a different way from ellipticals. S0s are an intermediate population between that of ellipticals and spirals, in terms of their PPSD. If S0s originate from spirals through some environmental process, such a process could also be responsible for the gradual PPSD evolution from that of spirals to that of ellipticals (Paper I). However, no such evolution is seen for the subhalo PPSD in cluster-sized halos from cosmological simulation (I. Marini, priv. comm.).
While the Q ν and Q r,ν profiles of S0s and spirals differ from the simulation-based expectations for DM particles, it is surprising that their Q ρ and Q r,ρ do not. Then, violent relaxation cannot be the only process conducive to the observed Q ρ and Q r,ρ power-law slopes. According to Dehnen & McLaughlin (2005), the dynamical process that leads to the Q r power-law behavior can be understood in terms of the Jeans equation of dynamical equilibrium by assuming that β is linearly related to γ. In their model, the logarithmic slope α r of Q r must be related to the inner velocity anisotropy by α r = −35/18 + (2/9) β 0 (Eq. [14]).
Fig. 7. Distributions of the MCMC chain elements in the α r − β 0 plane for the different models (Table 1). The contour contains 68% of the MCMC chain elements for model 15e (navy blue). We omit the contours of the other models for the sake of clarity. The solid line is the relation α r = −35/18 + 2/9 β 0 from Dehnen & McLaughlin (2005).
In Fig. 7, we show the distributions of the MCMC chain elements in the α r − β 0 plane, separately for the three morphological classes. Ellipticals follow quite closely Dehnen & McLaughlin's relation (eq. [14], above), and so do spirals for most - but not all - models, while S0s do not. So the dynamical process that leads to the observed Q ρ and Q r,ρ power-law slopes might indeed be the one suggested by Dehnen & McLaughlin (2005).
On the other hand, the process described by Dehnen & McLaughlin (2005) does not seem to be a viable explanation for the consistency of the Q ρ (r) and Q r,ρ (r) of S0s with simulation-based expectations for DM particles, as they appear to depart from the relation between PPSD slope and inner velocity anisotropy of Eq. (14). However, among the three morphological classes considered here, S0s show the strongest, albeit not very significant, deviation of the Q ρ and Q r,ρ profile slopes from the simulation-based expectations (see Fig. 5). In Paper I we argued that S0s are a transition class between the spiral and elliptical classes, as far as their dynamics within the cluster is concerned. Their velocity dispersion profile appears to be close to that of spirals near the center and to that of ellipticals in the outer regions. This is true not only for the line-of-sight velocity dispersion profile, as we already noted in Paper I, but also when considering the total, σ(r), and radial, σ r (r), profiles, as shown in Fig. 8. On the other hand, the ellipticals and spirals have very similar σ(r) and σ r (r), except for different normalizations, as expected from the similarity of the logarithmic slopes of their Q ρ and Q r,ρ profiles.
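As a quick sanity check (my own arithmetic, not from the paper), the Dehnen & McLaughlin relation α r = −35/18 + (2/9) β 0 yields values close to the simulation-based Q r slope of −1.92 ± 0.05 for near-isotropic centers:

```python
def alpha_r(beta0):
    # Dehnen & McLaughlin (2005): alpha_r = -35/18 + (2/9) * beta_0
    return -35.0 / 18.0 + (2.0 / 9.0) * beta0

a_iso = alpha_r(0.0)   # isotropic center: -35/18, about -1.944
a_rad = alpha_r(0.5)   # mildly radial center: -11/6, about -1.833
```

A mildly radial center thus flattens α r by about 0.1, comparable to the dispersion of the observed slopes.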
It is possible that S0s are not a homogeneous class, but a mixed bag of galaxies that formed in different ways at different epochs of the cluster evolution, namely by ram pressure stripping of disks (Gunn & Gott 1972) and by merger growth of bulges (van den Bergh 1990). These two formation channels of S0s are suggested by studies of their internal structure, gas content, and kinematics (Coccato et al. 2020; Deeley et al. 2020, 2021), with disk stripping dominating in clusters and bulge growth in isolated galaxies (Deeley et al. 2020). So maybe the Q ρ and Q r,ρ profiles of S0s agree with simulation-based expectations (albeit less well than those of ellipticals and spirals) because some S0s followed the dynamical history of ellipticals and some that of spirals.
We are thus led to suggest the following conclusion. Q ν (r) and Q r,ν (r) keep memory of the accretion time of the cluster population, while Q ρ (r) and Q r,ρ (r) are related to the dynamical equilibrium of the population within the cluster potential, which is not necessarily achieved via violent relaxation only.
Discussion of results on num and tempX stacked clusters
We now turn to the results of our analysis using the other two stacking methods (to determine the virial radii): num (richness) and tempX (X-ray temperature). The tables and figures are displayed in Appendix A.
Fig. 9. Difference ∆ between the logarithmic slopes obtained for the num (circles) and tempX (crosses) scalings and the slopes obtained for the sigv scaling, for the three morphological classes, ellipticals (red), S0s (green), spirals (blue), for the different models (x axis). The ∆ differences are given in units of the quadratically combined dispersions of the slopes.
The fractions of linear profiles, f l , for the num and tempX scalings are given in Appendix A. One sees f l values as low as, or even a bit lower than, 40%, depending on the model and the galaxy type, considerably lower than the > 95% obtained for the sigv scaling. This indicates that the ensemble cluster built using the sigv scaling has a (projected) phase-space distribution that is more similar to that of simulated halos than the ensemble clusters built using the other two scalings. Another remarkable difference of the num and tempX scalings is that f l for the Q r profiles is on average lowest for ellipticals among the three morphological classes, while it is lowest for S0s when considering the sigv scaling.
The marginal distributions of the best-fit logarithmic slopes of Q(r) and Q r (r) (considering only linear profiles) are displayed in Figs. A.3 and A.4 for the num scaling (left panels: Q ρ ; right panels: Q ν ), and in Figs. A.5 and A.6 for the tempX scaling. For the num scaling, we show in Fig. A.7 the averages and dispersions of the Q(r) and Q r (r) logarithmic slopes obtained on the MCMC chain elements (considering only linear profiles). Fig. A.8 shows the corresponding quantities for the tempX scaling. We also provide the average and dispersion of the logarithmic slopes of the Q and Q r profiles for all MCMC chain elements with linear PPSDs, for all models and all galaxy types, in Tables A.1 and A.2 for the num and tempX scaling, respectively.
The results for the slopes of Q ρ (r) and Q r,ρ (r) obtained using the num and tempX scalings are generally within one standard deviation of the results obtained using the sigv scaling. This is illustrated in Fig. 9, where we show the differences ∆ between both the num- and the tempX-scaling slopes and the sigv-scaling slope, considering only linear profiles among all MCMC chains. The differences are shown in units of the quadratically combined dispersions of the slopes, σ slopes . These differences are not statistically significant. The most significant differences come from the Q r (r) slopes of ellipticals and spirals, which are almost identical.
Fig. 10. Difference of the mean logarithmic slope of Q r,ρ with the logarithmic slope of Q ρ (here α = −1.8) as a function of the difference in velocity anisotropies between the virial radius and 0, using Eqs. (8) and (12).
S0s also appear to be intermediate between ellipticals and spirals in the β 0 − α r diagram. As seen in Figs. A.9 and A.10, it is not the S0s, but the spirals that are the most distant from the expected relation, contrary to what was found using the sigv scaling. Moreover, the velocity dispersion profiles of S0s show less of a transition from those of spirals at small radii to those of ellipticals near the virial radius (Figs. A.11 and A.12) than is the case for the sigv stack (Fig. 8). The results for the num and tempX scalings therefore suggest that S0s are an intermediate class between ellipticals and spirals, rather than a mixed class. Another remarkable difference with respect to the sigv scaling is that the β 0 − α r relation of Eq. (14) is not obeyed by any of the three morphological classes. This means we cannot rely on Dehnen & McLaughlin (2005)'s explanation for why later accreted galaxy populations, such as the spirals and, to a lesser extent, the S0s, have Q(r) profiles consistent with those of DM particles.
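The ∆ statistic of Fig. 9 is a slope difference expressed in units of the quadratically combined slope dispersions; a minimal sketch (the numerical values below are invented for illustration):

```python
import math

def delta_sig(slope_a, sig_a, slope_b, sig_b):
    # difference of two slopes in units of their quadratically combined dispersions
    return (slope_a - slope_b) / math.hypot(sig_a, sig_b)

# e.g. a num-scaling slope of -1.70 +/- 0.10 against a sigv-scaling slope of -1.84 +/- 0.08
d = delta_sig(-1.70, 0.10, -1.84, 0.08)   # about 1.1 combined dispersions: not significant
```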
Not only are the Q r,ρ profiles obtained using the num and tempX scalings flatter than simulations predict for DM particles, they are in some cases even flatter than the Q ρ profiles. This can happen if the velocity anisotropy profiles are more radial near the center than at the cluster virial radius, as illustrated in Fig. 10. Anisotropy profiles of this kind are not typical of either simulated cluster-size halos (e.g., Ascasibar & Gottlöber 2008;Mamon et al. 2010;Lemze et al. 2012;Munari et al. 2013;Lotz et al. 2019) or real clusters (e.g. Natarajan & Kneib 1996;Biviano & Katgert 2003;Lemze et al. 2009;Wojtak & Łokas 2010;Biviano et al. 2013;Annunziatella et al. 2016;Aguerri et al. 2017;Capasso et al. 2019). This suggests that one should take the results obtained using the num and tempX scalings with some caution.
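The effect illustrated in Fig. 10 follows directly from R = Q r /Q = (3 − 2β)^(3/2): if β decreases outward (more radial at the center than at the virial radius), R rises with radius and Q r is flatter than Q. A numerical sketch, with an anisotropy profile invented for illustration:

```python
import numpy as np

y = np.logspace(-1.5, 0.0, 40)              # r/r_200 from ~0.03 to 1
beta = 0.5 - 0.5 * y / (0.3 + y)            # radial at the center, near-isotropic outside
log_R = 1.5 * np.log10(3.0 - 2.0 * beta)    # log(Q_r/Q)
# slope(Q_r) - slope(Q); positive means Q_r is flatter (less steep) than Q
dslope = float(np.polyfit(np.log10(y), log_R, 1)[0])
```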
In conclusion, while the results we obtain for the num and tempX scalings are not significantly different from those obtained for the sigv scaling, they are more distant from the predictions from numerical simulations for what concerns the linearity of the profiles and the slope of Q r,ρ (r). If the power-law behavior of Q(r) and Q r (r) could be theoretically motivated, the better adherence of the sigv-based profiles to the power-law behavior would suggest that the velocity dispersion is a better r 200 estimator than either the cluster richness or its X-ray temperature, at least for the WINGS cluster data set.
Summary and conclusions
We determined the average Q and Q r profiles of nearby galaxy clusters, using either total mass density ρ(r) or tracer number density ν(r), as well as the velocity dispersion profiles of three galaxy classes, ellipticals, S0s, and spirals. For this, we have used the results of the MCMC analysis of the kinematics of a velocity-dispersion based (sigv) stack of 54 regular clusters (Paper I) from the WINGS dataset (Fasano et al. 2006;Cava et al. 2009;Moretti et al. 2014) performed with the MAMPOSSt code in Paper II.
We find that Q ρ (r) and Q r,ρ (r) are very close to the power-law relations predicted by numerical simulations for DM particles (Taylor & Navarro 2001; Rasia et al. 2004; Dehnen & McLaughlin 2005), at least in a range from a few percent to one virial radius. On the other hand, Q ν (r) and Q r,ν (r) agree with the simulation-based predictions for DM particles only for the ellipticals, and deviate marginally and significantly from the simulation-based predictions for the S0s and spirals, respectively. Only the spiral Q ν (r) is similar to that of subhalos in halos from cosmological hydrodynamical simulations.
We checked our results on two different stacks of the same data set, based on richness (num) and gas temperature (tempX) scalings. While we find a lower fraction of power-law Q and Q r profiles, the average slopes of these profiles are not significantly different from those obtained for the sigv scaling.
We argue that our results based on the sigv scaling support a scenario in which Q ρ (r) and Q r,ρ (r) are either established early on, during the cluster violent relaxation phase, for the DM and ellipticals, or established subsequently, for spirals, by adapting their orbital and spatial distribution as they move towards dynamical equilibrium in the cluster potential. S0s might be a mixed class, part of them following the dynamical history of ellipticals and the other part that of spirals, as suggested by our analysis of the sigv stack, or an intermediate class between spirals and ellipticals, as is consistent with our analysis of the num and tempX stacks. Q ν (r) and Q r,ν (r) are not universal, and depend on the time of accretion of the tracer population in the cluster.
In conclusion, our results give strong observational support to the simulation-based power-law Q and Q r profiles when they are defined using total mass density ρ(r) rather than the tracer number density ν(r).
A. Biviano and G. A. Mamon: Structural and dynamical modeling of WINGS clusters
Appendix A: Results for the num and tempX scalings
In the main text of this paper, we provided the results for the velocity dispersion-based sigv scaling used to stack the clusters. Here we provide the results for the richness-based (num) and X-ray temperature-based (tempX) scalings.
Notes. Columns labelled 'f l' give the fraction of linear MCMC Q profiles. Columns labelled 'slope' give the average and dispersion of the slopes of the MCMC Q profiles with f l > 0.1. Rows labelled 'mean' give the weighted mean and dispersion of all the models, using the slope dispersions as weights.
"year": 2022,
"sha1": "a70bbd3e38d26a04633f4b62c2a6e18e214871a9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a70bbd3e38d26a04633f4b62c2a6e18e214871a9",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Physics"
]
} |
Texto & Contexto Enfermagem
Objective: mapping the scientific evidence about the use of educational technologies for caregivers in the context of Pediatric Oncology hospital units. Method: this is a scoping review based on the PRISMA-ScR recommendations and on the Joanna Briggs Institute methodology. The search was performed by two independent reviewers in 12 national and international data sources. Publications available in full and free of charge in electronic means were included, with no language or time restrictions. Abstracts were excluded, as well as letters to the editor, opinion articles, books, monographs, dissertations, theses, blog postings, and theoretical and reflection articles. Data analysis was descriptive, with elaboration of charts and absolute and relative frequency statistics. Results: the final sample comprised 15 studies published between 2010 and 2020, mainly from developed countries. Apps and videos were the predominant educational technologies, followed by printed materials, contributing to increasing the caregivers' knowledge about the disease and cancer treatment, symptom management and side effects of the chemotherapy drugs. In addition, when compared to printed materials, the videos showed a reduction in the caregivers' anxiety levels. The professionals most involved with the technologies were nurses and physicians. Conclusion: it was possible to map that apps and videos are the main educational technologies being developed to instruct caregivers, addressing diagnosis and treatment of child-youth cancer, symptom management and self-care promotion. DESCRIPTORS: Educational technology. Caregivers. Neoplasms. Child. Adolescent. Hospital units.
INTRODUCTION
The field of Pediatric Oncology involves providing care to children and adolescents with cancer, which affects the child-youth population aged from zero to 19 years old. In this light, child-youth cancer can be understood as a group of several diseases that have in common the proliferation of abnormal cells, mainly affecting blood cells and supporting tissues; for that reason, the predominant types are leukemias (28%), Central Nervous System tumors (26%) and lymphomas (8%) 1 .
It is noted that, each year, cancer affects more than 300,000 children/adolescents at the global level, with an increase in underdeveloped countries 2 . In Brazil, child-youth cancer is the leading cause of death (8% of the total) due to disease among children and adolescents aged from 1 to 19 years old, only surpassed by deaths due to external causes 3 .
In addition, child-youth cancer does not present risk factors well evidenced in the literature, as in the case of adults, and its main symptoms can be similar to those of other pathologies, which contributes to delays in diagnosis. Added to the statistics, this fact turns child-youth cancer into a major challenge for Pediatric Oncology services, mainly for the hospital units involved in the diagnosis, treatment and monitoring of children/adolescents, as well as their families 2 .
In this context, child-youth cancer causes significant repercussions for the life not only of the child/adolescent, but also of their caregiver, who goes a long way until the diagnosis is made, needing to change their routine to adapt to a new reality 4 . It is evidenced that, at treatment initiation, caregivers lack information both about the disease and regarding its treatment. In addition, due to the impact caused by the child-youth cancer diagnosis, caregivers have difficulty assimilating the guidelines provided by the health professionals 5 .
In this way, cooperative groups that study child-youth cancer recognize the importance of health education for patients, family members and caregivers in Pediatric Oncology services, and encourage the development of research studies aimed at facilitating understanding by those involved in the cancer treatment process; they also highlight the nurses' role as educators of the patient/family [6][7][8][9][10] .
From this perspective, the use of educational technology emerges, allowing the incorporation of resources that can contribute to strengthening the teaching-learning process and to achieving educational goals 11 . Thus, in the form of printed materials or audiovisual resources, educational technologies have been identified as essential for the development of activities related to health education, contributing to knowledge acquisition by patients, family members and caregivers in different practice contexts 12 .
In Pediatric Oncology, educational technologies can be incorporated at the beginning of the cancer diagnosis and treatment, or before performing procedures that are necessary during the treatment of child-youth cancer, in order to expand the knowledge of the patient, family member and caregiver, ease learning and reduce the anxiety related to lack of knowledge about the disease. In view of the importance of educational technologies, it is necessary to further discuss their incorporation in Pediatric Oncology hospital units, given that this is where caregivers spend most of their time during the cancer treatment of children/adolescents 13 .
Based on the above, this study is justified by the need to investigate the educational technologies developed and used to address the gaps in the knowledge of caregivers of children and adolescents with cancer in Pediatric Oncology services. Thus, the study aims at mapping the scientific evidence about the use of educational technologies for caregivers in the context of Pediatric Oncology hospital units.
METHOD
This is a Scoping Review developed following the guidelines set forth in the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) international guide 14 and those by the Joanna Briggs Institute (JBI) Reviewers Manual 15 , with its research protocol registered in the Open Science Framework platform (https://osf.io/jnfz9/).
The study was structured in the stages conceived by Arksey and O'Malley: (1) definition of the research question; (2) identification of relevant studies; (3) selection and inclusion of studies; (4) data organization; and (5) compilation, synthesis and report of the results 16 .
As a first step, a survey of the scientific literature was conducted to detect reviews with a similar research scope. The following platforms were consulted: International Prospective Register of Systematic Reviews (PROSPERO), Open Science Framework (OSF), The Cochrane Library, JBI Clinical Online Network of Evidence for Care and Therapeutics (COnNECT+) and Database of Abstracts of Reviews of Effects (DARE). The results revealed no publications with an objective similar to that of this review.
The PCC (Population, Concept and Context) mnemonic was used to formulate the research question, as indicated by the JBI. The following was defined: P - Caregivers of children and adolescents; C - Educational technologies in health; and C - Hospital units. From this starting point, the following research question was formulated: "which are the educational technologies developed for caregivers in the context of Pediatric Oncology hospital units?".
The search for articles was conducted in August and September 2021, in the following databases: PubMed, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Scopus, Web of Science, Science Direct, Literatura Latino-Americana e do Caribe em Ciências da Saúde (LILACS), Cochrane Library, Wiley Online Library, Gale Academic Onefile and Google Scholar, as well as in the Scientific Electronic Library Online (SCIELO). For the Gray Literature (theses and dissertations), the Theses and Dissertations Catalog of Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) was used. The search was carried out on the Journals Portal of CAPES, through remote access via the Comunidade Acadêmica Federada (CAFe) platform, a tool made available by Universidade Federal do Rio Grande do Norte (UFRN).
It is noted that the search strategy was adapted according to the specificities of each source used; however, the combinations between the descriptors were kept and time and language restriction filters were not added, as shown in Chart 1.
To select the articles, the following inclusion criteria were adopted: publications available in full and free of charge in electronic media, without language or time restrictions. Abstracts were excluded, as well as letters to the editor, opinion articles, books, monographies, dissertations, theses, blog postings, and theoretical and reflection articles.
Initially, the methodological process followed to select and include the studies consisted in identifying the publications in the sources using the inclusion and exclusion criteria. Screening and inclusion of studies were performed by two independent evaluators, simultaneously and on different electronic devices, followed by full-text reading of the selected studies. Disagreements between the reviewers during the selection process were mediated through meetings and, after discussion, it was decided whether to include the study in the review or to exclude it. A reverse search was also performed in the references of the selected articles to identify possible relevant studies. Sample search strategy (from Chart 1): TITLE-ABS-KEY (caregivers) AND TITLE-ABS-KEY (child OR adolescent) AND TITLE-ABS-KEY (health AND education OR educational AND technology) AND TITLE-ABS-KEY (medical AND oncology OR neoplasms OR cancer) AND TITLE-ABS-KEY (hospital AND units OR health AND services). After selecting the studies, for data organization and extraction, the authors created a spreadsheet in Microsoft Excel ® with information such as: author and year of the study, country of publication, method design, intervention used, type of educational technology, professionals involved with the educational technology, and main results of the studies.
Regarding the level of evidence and degree of recommendation, according to the Oxford Center for Evidence-based Medicine, the lower the number presented by the study, the better its level of evidence, while studies with an "A" rating are considered to be of higher relevance, presenting a higher degree of recommendation 17 .
Data analysis was descriptive, with elaboration of charts and absolute and relative frequency statistics.
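The descriptive analysis above reduces to absolute counts and relative frequencies over the included studies. A minimal sketch of how such a frequency table could be computed is shown below; the category labels in `techs` are illustrative stand-ins, not the review's actual dataset, and `frequency_table` is a hypothetical helper, not code used by the authors.

```python
from collections import Counter

def frequency_table(values):
    """Return (category, absolute count, relative frequency %) tuples,
    sorted by descending count."""
    counts = Counter(values)
    total = sum(counts.values())
    return [(cat, n, round(100 * n / total, 1))
            for cat, n in counts.most_common()]

# Illustrative data: one entry per included study, labeled by
# technology type (counts chosen to mirror the 15-study sample).
techs = ["app"] * 5 + ["video"] * 3 + ["manual", "booklet", "game",
                                       "software", "social media",
                                       "mobile", "lectures"]
for cat, n, pct in frequency_table(techs):
    print(f"{cat}: {n} ({pct}%)")
```

With 5 apps out of 15 studies, the first row prints `app: 5 (33.3%)`, matching the kind of "five (33.3%)" statements reported in the Results.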
It is noted that this study was not submitted to a Research Ethics Committee (Comitê de Ética e Pesquisa, CEP), as all the data included in this review are publicly accessible.
RESULTS
A total of 19,034 studies were identified in the data sources; however, 11,258 articles were not accessible, leaving 7,771 for the screening process. Subsequently, the titles and abstracts of the articles were read, with exclusion of 7,714 that were not related to the theme, totaling 57 articles assessed for eligibility. After reading the full texts, 46 articles were excluded for not answering the research question, leaving 11 articles. Four additional articles were included through the reverse search, for a final sample of 15 articles, as presented in the flowchart (Figure 1).
Regarding the year of publication, 2015 stood out, representing four (26.6%) of the selected studies, followed by 2019 with three (20%) studies; 2010, 2016 and 2020 with two (13.3%) studies each; and 2017 and 2018 with one (6.6%) study each.
Furthermore, in relation to the country of origin of these articles, the United States of America (USA) prevailed with four (27%) publications, followed by the United Kingdom with three (20%), Brazil with two (13.3%), Iran with two (13.3%), and Germany, Australia, Chile and Indonesia with one (6.6%) publication each.
Experimental designs predominated in the sample, representing six (40%) articles, followed by descriptive studies with four (26.5%) articles, scoping reviews with two (13.3%) articles, and systematic, cohort and methodological studies with one (6.6%) article in each category.
In addition, according to the methodology adopted in each study, there were five (33.3%) publications with level of evidence 2C, four (26.6%) with 1B, three (20%) with 2A, and three (20%) with 2B. In relation to the degree of recommendation, 11 (73.3%) and four (26.6%) publications obtained degrees B and A, respectively, as shown in Chart 2.

Regarding the type of educational technology, apps and videos predominated, with five (33.3%) and three (20%) results, respectively. The others corresponded to social media platforms, a manual, a mobile technology, an interactive game, a software program, lectures and a booklet, each one representing 6.6%.
Regarding the professionals involved with the educational technology, nurses and physicians appeared in seven (58.3%) of the results, in addition to a multidisciplinary team, scientists, Information Technology professionals and software engineers, with one (6.6%) result for each category. The type of technology and the professionals involved, as well as the main results of the studies that used these technologies, are described in Chart 3.
Chart 3 - Mapping of the results according to identification of the studies, type of educational technology, professionals involved in development of the technology and main results. Natal/RN, Brazil, 2022.

- [row header lost] Multimedia materials, such as educational videos, are useful for learning and reducing anxiety in caregivers of children/adolescents with cancer, when compared to the conventional methods applied before intrathecal chemotherapy.
- E2 18 . Social media platforms; Physicians. Cancer treatment optimization at all the support levels, from provision of information and adherence to the treatment to diet and interventions with physical exercises.
- E3 19 . Video; Nurses. Considerable increase in the caregivers' knowledge after multimedia-based education.
- E4 20 . Manual; Nurses. Increase in the caregivers' knowledge; training about the chemotherapy treatment and its effects should be provided before initiating chemotherapy.
- E5 21 . App; Physicians. It increases patients' and family members' access to education and to reliable and suitable information about the disease.
- Mobile technology; Physicians. The caregivers wanted medical knowledge and management apps, as well as apps for symptom management and medication reminders; most of the caregivers use the mobile technology with minimal barriers.
- Educational interactive game; Nurses. An effective strategy to teach self-care behaviors and interact with the patients, reducing the fear and anxiety related to the effects of chemotherapy.
- Software program; Nurses. It assists nurses in devising guidelines for caregivers; it improves quality of the information; it eases interpreting and adhering to the recommendations.
- App; Multidisciplinary team. Uncertainty at the beginning of diagnosis is perceived globally, and apps aimed at education in health offer a potential benefit for the family members of children with cancer.
- Lectures and manual; Nurses. Increase in the knowledge of the parents that were offered the educational program; there was no effect on the anxiety levels.
The studies showed that the use of educational technologies such as videos, apps and printed materials increases caregivers' knowledge about the cancer diagnosis and treatment process. When reduction in the caregivers' anxiety was evaluated, the videos showed a significant result compared to the printed materials.
DISCUSSION
The results pointed to the main educational technologies that are being produced and gradually introduced in Pediatric Oncology hospital units, highlighting their contribution to the health education process, as they ease understanding and reinforce the guidelines provided by the professionals [32][33] .
In relation to the development and use of technologies for health education, developed countries have shown significant progress and a greater willingness to employ modern technologies in the health area. This can be observed in this review, in which most of the educational technologies described came from articles produced in developed countries, such as the USA and the United Kingdom 9,20,22,25,27 .
It is worth noting that studies on educational technologies carried out in Brazil have also been published, mainly involving the development of booklets and software programs 32 . However, there is an evident need to broaden health professionals' engagement in the production of educational technologies, mainly in developing countries, given the reduced number of publications evidenced in the current study.
In the Pediatric Oncology context, educational technologies aimed at clarifying the disease and treatment are being used mainly by nurses and physicians. Nurses act as health educators and are directly involved with the guidelines on chemotherapy treatment, side effects, hygiene habits, food and general care measures, developing resources that assist in the health education process. Thus, nurses' role as educators can be attributed to their greater involvement with the use of educational technologies 34 .
Chart 3 (continuation) - Type of educational technology, professionals involved and main results.

- E11 27 . App; Physicians and information scientists. Support for clinical management; monitoring of the chemotherapy symptoms; self-care promotion.
- App; Physicians, nurses and software engineers. The caregivers appreciated the idea of using a smartphone app to gain more knowledge and receive more support; need for information and knowledge to care for the children at home.
- [type lost] Information Technology professionals. It offers structured and personalized information about the late effects of childhood cancer and monitoring examinations.
- E14 30 . Booklet; Nurses. Importance of having access to the material at treatment initiation; all the information is considered positive and clarifying.
- E15 31 . Video; Physicians. Structured information about leukemia, its treatment and chemotherapy; treatment refusal decreased and event-free survival was significantly increased among poor families; better knowledge is still required to manage treatment toxicity.

*ID: Identification of the article.
Another point to highlight is the incorporation of Digital Information and Communication Technologies (DICTs) in health education. Both medical and nursing education increasingly interact with technologies that stimulate learning. Thus, from their training onward, physicians and nurses have direct contact with the DICTs, facilitating the use of educational technologies in the work environment [35][36] .
Among the educational technologies, apps and videos were the most cited in the publications. Health technologies such as apps are gaining popularity and have the potential to benefit patients and family members due to their easy access, low cost and availability of reliable information about the disease and treatment 21,25 .
The main apps that are being developed for caregivers address the health education process on cancer treatment, symptom evaluation, social support, clinical management, monitoring of chemotherapy side effects and self-care promotion 21,25,[27][28][29] .
It is also evidenced that the families of children/adolescents with cancer wanted to have access to an app that offered information about the disease and treatment of their children 25 , as well as for the management of symptoms and medications 22,28 . Apps that allow parents to communicate with health professionals, with answers to questions and recommendations, promote parents' self-confidence to care for children with cancer 21 .
In addition to apps, it is noted that, by using appealing and dynamic resources, educational videos are also important tools for the health education process in Pediatric Oncology, contributing to learning in a playful way 31 . Using videos contributes to autonomy and to the adoption of care practices, mainly due to the interaction with images and sound, making it possible to understand the content presented 7,19,31 .
The themes addressed by the videos were related to health education for the diagnosis of child-youth cancer and cancer treatment 7,19,31 . From this perspective, the study by Mostert et al. (2008) 31 stands out, which compared the outcome of Acute Lymphoblastic Leukemia (ALL) treatment before and after the introduction of a parental education program that used an educational video to instruct the caregivers. It was observed that treatment refusal was reduced and that event-free survival increased significantly.
A difference is noticed between the studies regarding the reduction of anxiety with the use of educational technologies. Through the use of resources such as educational videos and games, it was possible to achieve a significant reduction in anxiety 7,23 . However, when printed materials were applied, there was no significant effect on the caregivers' anxiety level 26 .
There are other factors that can influence the level of anxiety in caregivers of children with cancer; for example, social, economic and family factors must be taken into account during the educational interventions. Another important point is the choice of the educational technology, which requires observing the needs and the context in which the individual is inserted 7 .
In addition, there was an increase in the caregivers' knowledge about the effects of chemotherapy, risk of infection, oral hygiene and adequate nutrition for children/adolescents with cancer 18,20 .
The caregivers showed minimal barriers to using apps, videos and printed materials, one example being data limitations when accessing mobile technologies. They showed interest in receiving information and knowing more about the disease and treatment, as well as about caring for their children after discharge home 19,22,26,28 .
Some apps and videos were still in their development phase, or were being evaluated as to the caregivers' interest in using them as support during cancer treatment, and their implementation in Pediatric Oncology services still needs to be enabled 22,[28][29] .
It should be noted that there are still few interventions using educational technologies for caregivers of children, with the need to develop resources that may support the parents and families that care for children with cancer 19,28 .
In summary, the results of this review contribute to the scientific community, as they allowed mapping the main studies on the theme, highlighting the importance of developing and incorporating educational technologies in Pediatric Oncology services to ease the health education practice, as well as to foster greater participation of caregivers in coping with the disease and cancer treatment. This review can help health professionals devise educational strategies that ease communication of the guidelines aimed at the diagnosis and treatment of child-youth cancer.
The following are pointed out as study limitations: the number of articles unavailable for access, which may have reduced the sample.In relation to the limitations on the use of educational technologies, it is noted that, for the apps and software programs, it is necessary for hospital units to have Internet access for caregivers.In addition, it is necessary to implement measures to prevent infection when handling technological devices.
CONCLUSION
The study allowed mapping that apps and videos are the main educational technologies being developed to instruct caregivers of children and adolescents in Pediatric Oncology hospital units. The main themes of the educational technologies involved health education focused on the diagnosis and treatment of child-youth cancer, evaluation and management of symptoms and side effects of chemotherapy, and self-care promotion.
When compared to printed materials, videos and apps have significant potential to meet the caregivers' needs, given that they provide information in a dynamic way, using resources such as animation and sound to reinforce the content, which allows for knowledge acquisition.
However, most of the educational technologies described in the study were still being tested with the caregivers, which does not allow highlighting which specific platform, video or app best meets the needs of patients, caregivers and family members.
As for the results of the studies, educational technologies expand caregivers' access to health education, contributing to enhanced caregiver confidence in facing the challenges associated with cancer treatment. Thus, these technologies need to be incorporated into Pediatric Oncology services, especially in developing countries, where there is a significant increase in child-youth cancer, showing that the theme needs further exploration to enable the implementation of technologies in Pediatric Oncology services.
Chart 2 - Characterization of the publications according to the year of publication, country of origin, data source, type of study, level of evidence and degree of recommendation of the studies included in the scoping review. Natal/RN, Brazil, 2021. (N=15) *ID: Identification of the article; †LE = Level of Evidence; ‡DR = Degree of Recommendation
"year": 2023,
"sha1": "b2d08316a2e2aac588fbf8c9bdce6bf991849ece",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/tce/a/S8MtB8mgZNwKr7RWLz8kpnN/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b2d08316a2e2aac588fbf8c9bdce6bf991849ece",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": []
} |
Association of Cytogenetics Aberrations and IGHV Mutations with Outcome in Chronic Lymphocytic Leukemia Patients in a Real-World Clinical Setting
Immunoglobulin heavy chain variable (IGHV) region mutations, TP53 mutation, fluorescence in situ hybridization (FISH), and cytogenetic analysis are the most important prognostic biomarkers used in chronic lymphocytic leukemia (CLL) patients in daily practice. In a real-life environment, there are scarce studies that analyze the correlation of these factors with outcome, mainly time to first treatment (TTFT) and overall survival (OS). This study aimed to typify IGHV mutation status, family usage, FISH aberrations, and complex karyotype (CK) and to analyze their prognostic impact on TTFT and OS in a retrospective study of 375 CLL patients from a Spanish cohort. We found that unmutated CLL (U-CLL) was associated with more aggressive disease, shorter TTFT (48 vs. 133 months, p < 0.0001), and shorter OS (112 vs. 246 months, p < 0.0001) than mutated CLL. IGHV3 was the most frequently used IGHV family (46%), followed by IGHV1 (30%) and IGHV4 (16%). The IGHV5-51 and IGHV1-69 subfamilies were associated with poor prognosis, while IGHV4 and IGHV2 showed the best outcomes. The prevalence of CK was 15%, and CK was significantly associated with U-CLL. In the multivariable analysis, IGHV2 gene usage and del13q were associated with longer TTFT, while VH1-02, +12, del11q, del17p, and U-CLL were associated with shorter TTFT. Moreover, VH1-69 usage, del11q, del17p, and U-CLL were significantly associated with shorter OS. A comprehensive analysis of genetic prognostic factors provides more precise information on the outcome of CLL patients. In addition to FISH cytogenetic aberrations and IGHV and TP53 mutations, IGHV gene families and CK information could help clinicians in the decision-making process.
Introduction
Chronic lymphocytic leukemia (CLL), the most frequent adult leukemia in Western countries, shows a heterogeneous clinical course that reflects differences in disease biology. One-third of CLL patients have an indolent disease, with a life expectancy similar to that of age-matched healthy individuals; other patients have a benign phase of 3 to 10 years, after which the disease progresses; and approximately 15% of patients have an aggressive disease, with a dismal clinical outcome despite therapy 1,2 . Therefore, some patients require early treatment, while others only need periodic follow-up. Multiple clinical and laboratory prognostic markers of CLL have been applied so far to try to predict the clinical course and outcome of this disease, highlighting the Rai et al 3 and Binet et al 4 clinical staging systems, chromosomal abnormalities detected by fluorescence in situ hybridization (FISH), recurrent gene mutations, and immunoglobulin heavy chain variable (IGHV) locus gene mutation status. The complex karyotype (CK), as defined by ≥3 chromosomal abnormalities by conventional cytogenetics with stimulation techniques, has emerged in the past years as an adverse prognostic and predictive marker not only for chemoimmunotherapy (CIT) treatments but also for novel agents 5,6 . Currently, the definition of CK in CLL is under discussion, since patients with five or more alterations do have a worse prognosis, which is not so evident in those with three or four cytogenetic aberrations 5 . The IGHV mutation status is one of the most robust prognostic factors in CLL, with a well-known ability to predict time to first treatment (TTFT), progression-free survival, and overall survival (OS).
Based on IGHV gene mutational status 7,8 , CLL can be divided into mutated (M-CLL) and unmutated (U-CLL), with an arbitrary value of a 2% deviation from, or <98% identity with, the corresponding germline sequence. Though this classification is almost universal, some M-CLL cases were found to be more aggressive than expected, presenting a percentage of "borderline" mutations (97-97.9% IGHV identity) and, therefore, being intermediate between U-CLL and M-CLL 9 . M-CLL is associated with better clinical outcomes than U-CLL, as confirmed by numerous retrospective studies, observational real-life studies, clinical trials, and meta-analyses 7,8,10,11 . In addition, IGHV mutation status is one of the biomarkers included in the CLL-IPI, and current guidelines recommend its determination in every patient before treatment. However, unlike TP53 mutation, IGHV mutation analysis should only be performed once due to its immutability [12][13][14] ; therefore, IGHV mutation is important not only to establish prognosis but also for appropriate therapeutic decision-making in the age of new drugs, and most current guidelines include IGHV mutational status in treatment algorithms. The determination of this mutational status requires next-generation sequencing or reverse transcription polymerase chain reaction (PCR) techniques. New techniques, such as multiparametric flow cytometry, are currently being explored in case molecular biology cannot be performed, with encouraging preliminary results 15 .
Beyond mutation status, selective usage of individual IGHV genes has also been described in CLL, with a different distribution of gene rearrangements between countries and an overuse of certain genes. For instance, IGHV3 is the most frequent subgroup followed by IGHV1 in Mediterranean countries, while IGHV4 is more prevalent in China [16][17][18] . Furthermore, specific genes are associated with mutation status or even clinical outcome, such as IGHV3-21, which harbors a bad prognosis regardless of mutation status 19,20 , and the published stereotyped subset #2, with poor results in both M-CLL and U-CLL 21 . Despite the meticulous characterization of the IGHV families, due to the large number of used genes, a limited number of studies have focused on analyzing the prognosis they provide and their interaction with other prognostic factors, except for exceptional cases such as IGHV3-21. In this study, we retrospectively analyzed a large series of 375 unselected CLL patients, studying the relationship between IGHV gene usage and mutation status, FISH abnormalities, and conventional cytogenetics, including CK. We also assessed the prognostic impact of IGHV gene usage on TTFT and OS in our series, regardless of the treatment received.
Patients
We performed a retrospective multicenter analysis of a Spanish cohort of patients diagnosed with CLL from the electronic database of the Cancer Research Center (Centro de Investigación del Cáncer - CIC), Salamanca, Spain. A total of 375 patients with comprehensive information about IGHV mutation status, IGHV family usage and FISH analysis were included in this study. The laboratory data were collected exclusively at diagnosis. The diagnosis was based on the World Health Organization classification for CLL 22 and the International Workshop on Chronic Lymphocytic Leukemia (iwCLL) guidelines 23 . Clinical and biological variables included age, sex, Rai et al and Binet et al stages, lymphocytosis, somatic mutations of the IGHV gene, genetic abnormalities determined by FISH (deletions of 11q (del11q), 13q (del13q), 17p (del17p), and trisomy 12 (+12)), and karyotyping cytogenetic analysis. This study was performed in accordance with national and international guidelines (Declaration of Helsinki) and approved by the local ethics committees.
IGHV Mutational Status
Analysis of the IGHV mutational status was performed locally at the CIC laboratory on peripheral blood CLL cells from fresh samples in tubes with ethylenediaminetetraacetic acid. IGHV gene rearrangements were amplified by reverse transcription-PCR in accordance with the European Research Initiative on CLL (ERIC) recommendations 24 . Mutation rates with a ≥2% difference from germline were considered mutated, while unmutated disease had a <2% mutation rate.
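Once the percent identity to the closest germline gene has been computed upstream (e.g., by germline alignment, which is outside the scope of this sketch), the mutation-status call reduces to a comparison against the 98% identity threshold. The following is a minimal illustrative sketch, not the ERIC pipeline; the function name is hypothetical, and the "borderline" band follows the 97-97.9% range mentioned in the Introduction.

```python
def classify_ighv(percent_identity: float) -> str:
    """Classify a CLL case by IGHV percent identity to germline.

    >= 98% identity (mutation rate < 2%)  -> 'U-CLL' (unmutated)
    <  98% identity (deviation >= 2%)     -> 'M-CLL' (mutated)
    97-97.9% identity is sometimes reported as 'borderline' M-CLL.
    """
    if percent_identity >= 98.0:
        return "U-CLL"
    if 97.0 <= percent_identity < 98.0:
        return "M-CLL (borderline)"
    return "M-CLL"
```

For example, a sequence with 95% germline identity (5% deviation) would be called M-CLL, while one at 99% identity would be called U-CLL.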
FISH
Clonal cytogenetic aberrations were studied by FISH analysis at the CIC laboratory from peripheral blood samples obtained at diagnosis, using commercially available probes for the detection of trisomy 12 (+12), del11q, del13q, and del17p (Vysis/Abbott Co., Downers Grove, Illinois, United States). Signal screening was carried out in at least 200 nucleated cells with well-delineated fluorescent spots. The sensitivity limits for detection were >5% and >10% of interphase cells with three signals and one signal, respectively, according to the laboratory cutoffs.
Cytogenetic Analysis
Cytogenetic analysis was also performed at the CIC laboratory on peripheral blood samples. Cells were stimulated with CpG oligodeoxynucleotides and analyzed according to standard laboratory procedures. CK was defined by the presence of three or more chromosome abnormalities (numerical and/or structural) in the same clone 5,25 , and all types of alterations were taken into account (unbalanced and balanced translocations, chromosome additions, insertions, duplications, deletions, monosomies, or trisomies). We identified three subtypes of karyotypes: normal karyotype (NK); altered karyotype (AK), with one or two chromosomal abnormalities; and CK, with at least three independent chromosomal abnormalities. CK cases with additional +12, +19, and +18 were not analyzed separately in the study 26 .
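The NK/AK/CK stratification above is a simple count-based rule. A minimal sketch (the function name is hypothetical; the tiers follow this section's own definitions):

```python
def karyotype_group(n_abnormalities: int) -> str:
    """Map the number of clonal chromosomal abnormalities
    (numerical and/or structural, in the same clone) to the
    three karyotype subtypes used in this study."""
    if n_abnormalities == 0:
        return "NK"   # normal karyotype
    if n_abnormalities <= 2:
        return "AK"   # altered karyotype: 1 or 2 abnormalities
    return "CK"       # complex karyotype: >= 3 abnormalities
```

Under this rule, a clone carrying del13q plus a translocation would be AK, and one carrying three independent aberrations would be CK.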
Statistical Analysis
Quantitative values were described through the median, standard deviation, and 95% confidence interval. Fisher's exact test was used to detect statistically significant relationships between the categorical variables. To test statistically significant differences in continuous variables of scale, ratio, or interval, Student's t test was applied. Survival analysis was performed using Kaplan-Meier curves for univariate analysis and Cox regression for multivariate analysis. TTFT was calculated as the interval between diagnosis and the beginning of first-line treatment. OS was calculated from the time of diagnosis to death or to the last follow-up visit. Statistical analysis was performed using SAS v9.4 and SPSS v21.
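To illustrate the survival methodology, here is a didactic product-limit (Kaplan-Meier) estimator over (time, event) pairs, where event = 1 marks the endpoint (treatment start for TTFT, death for OS) and event = 0 marks censoring. This is a self-contained sketch of the standard formula S(t) = ∏ (1 - d_i / n_i) over event times t_i <= t, not the SAS/SPSS code actually used in the study.

```python
def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct event time.

    times  : follow-up times (e.g., months from diagnosis)
    events : 1 if the endpoint occurred at that time, 0 if censored
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, out, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = n = 0
        while i < len(data) and data[i][0] == t:
            d += data[i][1]   # endpoint events at time t
            n += 1            # subjects leaving the risk set at t
            i += 1
        if d:                  # survival only drops at event times
            surv *= 1 - d / n_at_risk
            out.append((t, round(surv, 4)))
        n_at_risk -= n
    return out

# Toy example: events at 5 and 20 months, one patient censored at 10.
print(kaplan_meier([5, 10, 10, 20], [1, 1, 0, 1]))
```

The median TTFT or OS reported in the Results corresponds to the first time at which this estimated S(t) drops to 0.5 or below.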
(p = 0.051). The IGHV2 family had more cases with an M-CLL profile (67%), although the differences were not statistically significant. Notably, all cases of the IGHV5 family belonged to the IGHV5-51 subgroup, with a significant association with U-CLL (73%, p = 0.022).
Genomic Aberrations Detected by FISH, IGHV Mutation Status, and Family Usage
Next, we analyzed the incidence of cytogenetic aberrations detected by FISH (del11q, del13q, del17p, and +12) according to the IGHV mutation status (Table 3). FISH was performed in 100% of patients, and 62% of the patients harbored FISH alterations. As expected, del13q and normal FISH occurred more frequently in M-CLL patients (p < 0.0001 in both cases). By contrast, del11q (p = 0.0013), del17p (p = 0.002), and +12 (p = 0.0003) were associated with the U-CLL subgroup. Interestingly, IGHV mutation status had a significant impact on the outcomes among the different specific FISH subgroups (Table 3). Patients with isolated del13q and M-CLL had longer TTFT and OS than patients with del13q and U-CLL (Supplementary Fig. S1A, B). Conversely, IGHV mutation status did not influence the TTFT and OS of patients harboring +12 (Supplementary Fig. S1C, D). We did not find differences in TTFT and OS in patients with del11q and del17p (Supplementary Fig. S1E-H), but the number of cases was low in these cytogenetic alteration groups.
Relationship between Complex Karyotype, IGHV Mutation Status, and Family Usage
The relationship between CK, IGHV mutational status, and family usage was restricted to the 142 patients with karyotype information. NK was observed in the majority of cases (56%), followed by AK (29%) and CK in 22 patients (15%). A significant association between NK and M-CLL was detected in 61/79 patients (77%, p < 0.0001), while CK was significantly associated with U-CLL in 15/22 patients (68%, p = 0.001).
Moreover, within the subgroup of patients with CK, U-CLL conferred a shorter TTFT and more aggressive disease than M-CLL (p = 0.0195) (►Supplementary Fig. S2A, B).
A biased usage of IGHV genes was detected in the CK subgroup, with a preference for the IGHV1 family (11/22 patients; 4 CK cases belonged to the IGHV1-69 subfamily and 4 to IGHV1-02), followed by IGHV4 and IGHV5 (►Fig. 3). None of the cases belonging to the IGHV2 family had a CK.
Outcome, IGHV Mutation, Family Usage, and Genomic Abnormalities
As expected, TTFT was significantly longer in patients with M-CLL than in patients with U-CLL (133 vs. 48 months, p < 0.0001). In addition, we analyzed the impact of IGHV families, rearrangements, IGHV mutation status, FISH abnormalities, and CK on disease outcome. Due to the small size of some VH segment populations, we only included those with more than 10 cases. As emphasized in ►Supplementary Table S1, in the univariate analysis the variables significantly associated with shorter TTFT were IGHV1, VH1-02, VH1-69, VH5-51, +12, del11q, del17p, CK, and U-CLL. Conversely, IGHV2 and del13q were significantly associated with a longer TTFT. Del11q, del17p, and U-CLL were, as expected, related to shorter TTFT. In the multivariable analysis, IGHV2, VH1-02, del11q, del17p, +12, and U-CLL were related to worse TTFT. Regarding OS, IGHV1, IGHV1-69, del11q, del17p, +12, and U-CLL were also significantly associated with worse outcome, while IGHV2 and del13q were associated with good prognosis in the univariate analysis. However, only VH1-69, del11q, del17p, and U-CLL remained associated with shorter OS in the multivariable analysis.
Discussion
In this study of a large Spanish series of CLL patients with information on IGHV rearrangements (n = 375), we analyzed the frequency of IGHV gene usage and its correlation with other genetic variables, including FISH cytogenetic aberrations and CK, as well as with clinical outcome. Previous studies have found a significant impact of IGHV mutation status on the prognosis of patients with CLL. 7,18,20,27 However, studies relating IGHV gene usage to genomic aberrations by FISH and to cytogenetic complexity as a biomarker at diagnosis are less frequent.
First of all, we confirmed the preferential use of IGHV3 (46%), followed by IGHV1 (30%), IGHV4 (16%), IGHV2 (3%), and IGHV5 (3%). Our results are comparable to those observed in the populations of other Western countries, which confirm that subfamily usage follows different geographic patterns among countries. 7,18,20,27,28 Within the IGHV3 family, the most frequent in our study, the distribution of subfamilies is similar to that of other published groups in the southern European region. 19 In our series, the IGHV3 family was more associated with M-CLL, as expected. The most frequent subfamily was IGHV3-23; most of these cases were associated with M-CLL and showed a shorter TTFT than patients without this usage. Moreover, IGHV3-21 is more common in the Northern and Central European and Scandinavian CLL populations [28][29][30] and is infrequent in Southern European countries, 16,20 which probably explains its low frequency in our study (2.6%). In this family, we found a higher frequency of M-CLL cases, similar to previous reports. The IGHV3-21 family has been associated with an unfavorable prognosis independently of IGHV mutational status, [28][29][30][31][32] but we could not confirm this result due to the small number of such cases in our cohort. As a novel finding, we identified IGHV3-11 as a usage associated with dismal prognosis, with most of these patients belonging to the U-CLL subgroup and showing shorter TTFT and OS than patients without this usage, in line with previous work from our group. 16,33 The significance of these results could not be proved and should be taken cautiously due to the low representation of this subfamily in our study.
IGHV1 usage, regardless of mutational status, was associated with a worse prognosis and worse outcomes than the rest of the families, with a majority of U-CLL cases. The most frequent subfamily was IGHV1-69, as in other studies carried out in Western countries. 34 As described in other series, we confirmed that IGHV1-69 identifies a uniform group of patients with adverse outcome. 35 In our study, we observed a significant relationship with U-CLL and a shorter OS than in patients without this family, and the multivariable analysis showed a strong association with worse survival (►Supplementary Table S1).
With respect to the IGHV4 family, the patients more frequently had a mutated pattern. Globally, this group presented a longer TTFT compared with the rest of the patients, especially the IGHV4-34 subfamily, the most common in our study and in other similar ones. 36 Interestingly, in patients with IGHV4 we did not find poor-prognosis FISH alterations (del11q and del17p), in line with previous reports, 37 and conversely, del13q alone was observed in half of the patients. Our study further expands the evidence suggesting that this subset represents a group of patients with indolent disease.
Regarding the families found less frequently in our study, 67% of the IGHV2 cases were associated with M-CLL, with differences in TTFT and OS in the univariable analysis but not in the multivariable analysis, probably due to the low representation of the IGHV2 family. In our study, IGHV2 showed an absence of CK and a low percentage of poor-prognosis alterations (only one case with del11q and no cases with del17p), which could explain the good prognosis we found in this subgroup.
All cases of the IGHV5 family belonged to the IGHV5-51 usage. Previous studies suggest that this family should be studied to clarify the inferior prognosis of these patients. 16,38 In our study, we found a significant association between IGHV5-51 and U-CLL, and all patients except one were female. The dismal outcome of this subgroup in the univariate analysis is remarkable: it is the family with the shortest TTFT (hazard ratio 3.08, p = 0.01). Despite the poor prognosis of this subfamily, only 2 of the 11 patients had high-risk cytogenetic abnormalities.
Finally, as in other published series, the low representation of the IGHV6 and IGHV7 families does not allow estimation of a better or worse clinical course.
In summary, our results point out that belonging to the IGHV2 family could be a good prognostic factor, while the IGHV1 family and some of its specific usages, mainly VH1-69 and VH1-02, might be associated with a dismal outcome.
We also analyzed the cytogenetic abnormalities detected by FISH and karyotyping, and their relationship with IGHV mutation status. A German university study used FISH analysis to demonstrate that about 80% of CLL patients had at least one genomic alteration at diagnosis, and it established that patients with a sole del13q or +12 had a better OS than patients with del17p or del11q. [40][41] In the group of patients with +12 as the only cytogenetic aberration, patients with U-CLL or M-CLL did not show any significant differences in TTFT and OS based on IGHV mutational status, as previously reported. 39 Regarding poor cytogenetic risk (del11q or del17p) and IGHV mutational status, we did not find differences, probably due to the low representation.
Recent studies have shown that current FISH analysis, according to Döhner's hierarchical model, underestimates the true genetic complexity revealed by chromosome banding analysis. 42 In fact, 22 to 36% of CLL cases with "normal" FISH carry chromosomal aberrations at karyotyping. In particular, CK, defined by the presence of at least three chromosome lesions in the same clone, can be detected in 14 to 34% of CLL cases and is emerging as a new negative prognostic biomarker associated with an adverse outcome and worse response to CIT as well as to novel agents. 6 As in these studies, we also analyzed the CK. In our study, CK cases were relatively rare, representing 15% of the patients, in agreement with other published studies, 6,26,43,44 with a significantly higher proportion of U-CLL (68%). In addition, we observed that this combination allows identification of patients with M-CLL who are characterized by a more indolent disease and a longer TTFT than U-CLL, similar to the results obtained by the Italian group. 45 The correlations found in this work between IGHV mutational status, cytogenetic alterations by FISH, and CK reflect the need for additional clinical studies with larger numbers of patients, generally in the context of randomized clinical trials.
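The karyotype categories above follow a simple counting rule, which can be sketched in a few lines. The CK cutoff (at least three lesions in the same clone) is stated in the text; treating 0 lesions as NK and 1-2 as AK is our assumption for illustration only.

```python
# Minimal sketch of the karyotype categories used in the text.
# NK = normal, AK = abnormal (non-complex), CK = complex karyotype.
def karyotype_category(n_aberrations: int) -> str:
    if n_aberrations >= 3:   # cutoff stated in the text
        return "CK"
    if n_aberrations >= 1:   # assumed split between AK and NK
        return "AK"
    return "NK"

print([karyotype_category(n) for n in (0, 1, 2, 3, 5)])
# -> ['NK', 'AK', 'AK', 'CK', 'CK']
```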
It is important to consider that the iwCLL guidelines recommend testing IGHV gene mutation status at baseline in all patients diagnosed with CLL. 46,47 In addition, FISH analysis should be performed before any line of treatment of CLL patients. 47 Moreover, karyotyping could be introduced in the near future as a recommended test before the onset of therapy in CLL. In fact, FISH, karyotyping, and IGHV mutational status are probably the most powerful and validated clinical prognostic biomarkers used in our daily practice. 41,42,48 This study has several limitations: its retrospective nature and the impact of an inherent referral bias on our results. Even though we would have liked to analyze our results based on the origin of the patients, this was not feasible due to the size of our series and the scarce information about the individual ethnic origin of the patients, though the vast majority were of Caucasian origin. On some occasions, the number of cases and the relatively small sample size of some groups did not reach the level required to perform statistically significant analyses. Additional limitations include missing information about stereotyped IGHV subsets and the absence in this analysis of the main mutations of genes related to CLL. Finally, the patients in this study were treated almost exclusively with CIT (93%), and this could bias the survival data; the currently recommended treatment is not CIT but molecularly targeted drugs. No TP53 mutation data are available.
Conclusion
In conclusion, the interactions between IGHV gene usage, mutation status, FISH, and CK may help provide more precise information about the prognosis and clinical course of patients diagnosed with CLL. Further real-world studies similar to this one are needed in the context of treatment with new oral small molecules and new anti-CD20 monoclonal antibodies.
Fig. 3
Distribution of IGHV families among patients with CK (percentages of the total number of cases with complex karyotype). CK, complex karyotype; IGHV, immunoglobulin heavy chain variable.
Fig. 4
(A) Time to first treatment in CLL patients with mutated and unmutated IGHV. (B) Overall survival in CLL patients with mutated and unmutated IGHV gene. CLL, chronic lymphocytic leukemia; IGHV, immunoglobulin heavy chain variable.
Table 1
Baseline characteristics of patients. Abbreviations: FISH, fluorescence in situ hybridization; NP, not performed; NR, not reported.
Table 3
Relationship between IGHV mutational status and genomic aberrations by FISH. Abbreviations: FISH, fluorescence in situ hybridization; HR, hazard ratio; NC, not calculated; OS, overall survival; TTFT, time to first treatment. Global Medical Genetics Vol. 11 No. 1/2024 © 2024 The Author(s). | 2024-02-14T05:09:21.673Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "b02a9d775a01533a35c81d911f5e6f42a9676d3c",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b02a9d775a01533a35c81d911f5e6f42a9676d3c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
228990151 | pes2o/s2orc | v3-fos-license | Rhodium(iii)-catalyzed C–H annulation of 2-acetyl-1-arylhydrazines with sulfoxonium ylides: synthesis of 2-arylindoles
An efficient Rh(iii)-catalyzed synthesis of 2-arylindole derivatives via intermolecular C–H annulation of arylhydrazines with sulfoxonium ylides was accomplished. A variety of 2-acetyl-1-arylhydrazines with sulfoxonium ylides were converted into 2-arylindoles in satisfactory yields. Excellent selectivity and good functional group tolerance of this transformation were also observed.
Introduction
Indoles represent one of the most abundant heterocycles in natural products, biologically active molecules, pharmacological compounds, and materials. 1 Particularly, 2-arylindole and its derivatives are core structural frameworks in numerous drug molecules (Scheme 1). 2 Traditional strategies to access 2-arylindoles include the Fischer, 3 Larock, 4 Buchwald, 5 and Hegedus indole syntheses. 6 However, the above methods usually suffer from harsh reaction conditions, multistep syntheses, and limited substrate scope, and undesirable toxic waste is inevitable in some transformations. Therefore, developing more convenient, efficient, and sustainable methods to access 2-arylindole derivatives is highly desirable.
Over the past decades, transition-metal-catalyzed directed C-H activation has been developed as a powerful and straightforward synthetic approach to heterocycles. 7 Moreover, efficient synthesis of indole derivatives using this strategy has also been widely employed. 8 In recent years, sulfoxonium ylides, featuring operational safety and synthetic convenience as popular carbene surrogates, 9 have been used as important building blocks in transition-metal-catalyzed C-H annulation reactions with nucleophilic directing groups for the synthesis of indole derivatives. In 2019, Huang and co-workers realized efficient synthesis of 2-arylindoles via Ru(II)-catalyzed tandem annulation of N-aryl-2-aminopyridines and sulfoxonium ylides (Scheme 2a). 10 In the same year, Liu's group reported [Ru(p-cymene)Cl2]2-catalyzed imidamide C-H activation and coupling with sulfoxonium
Results and discussion
The C-H annulation reaction of 2-acetyl-1-phenylhydrazine (1a) with a-benzoyl sulfoxonium ylide (2a) was used as a model to optimize the reaction conditions (Table 1). Catalyst systems were first screened in the presence of NaOAc, which was used as the additive, in 1,2-dichloroethane (DCE) at 100 °C under a nitrogen atmosphere. The desired product 3aa was obtained in 35% yield by using [Cp*RhCl2]2/AgSbF6 as the catalyst system (entry 1). The transformation does not occur in the presence of other catalysts such as [Cp*Co(CO)I2]2 and [RuCl2(p-cymene)]2 (entries 2 and 3). The effect of silver salts was investigated (entry 1 vs. entry 4): AgNTf2 gave the desired product 3aa in 62% yield. The solvents were subsequently screened using [Cp*RhCl2]2/AgNTf2 as the catalyst system. Among the solvents examined [1,2-dichloroethane (DCE), 1,4-dioxane (dioxane), toluene, and methanol (MeOH)], DCE was the best solvent (entry 4 vs. entries 5-7). Among the various additives tested, NaOAc/HOAc showed the highest efficiency for the reaction (entries 8-11). Further enhancement of the yield of 3aa (93%) was achieved by increasing the loading of HOAc (entry 12). No reaction was observed when the model reaction was conducted in the absence of AgNTf2 (entry 13). Therefore, the optimal reaction conditions were identified as follows: 2.5 mol% of [Cp*RhCl2]2 with AgNTf2 (10 mol%) as the catalyst system, and NaOAc (50 mol%) with HOAc (1.0 equiv.) as the additives, in DCE at 100 °C under a nitrogen atmosphere for 12 h.
To further explore the practicability of our methodology, the Rh(III)-catalyzed annulation was scaled up to the gram scale (Scheme 5). The product 3aa was also isolated in excellent yield (91%) even with a reduced catalyst loading.
To gain insight into the mechanism of this reaction, control experiments were performed (Scheme 6). The deuterium kinetic isotope effect was investigated by conducting an intermolecular competition reaction between 1a and 1a-d5. The 3.3 : 1 ratio of 3aa to 3aa-d4 demonstrated that the cleavage of the aromatic C-H bond is probably involved in the turnover-limiting step (eqn (1)). A 2.4 : 1 ratio of 3ca to 3pa was observed in the intermolecular competition reaction between 1c and 1p (eqn (2)), indicating that C-H activation probably occurs through an electrophilic aromatic substitution (SEAr) process.
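As a back-of-the-envelope illustration (our sketch, not the authors' analysis): in an equimolar intermolecular competition at low conversion, the product ratio approximates the ratio of rate constants, so the observed 3.3 : 1 product ratio corresponds to a kinetic isotope effect of about 3.3.

```python
# Hypothetical helper: convert a measured product ratio into a rate-constant
# ratio, assuming equimolar substrates and low conversion (our assumptions).
def competition_ratio(major: float, minor: float) -> float:
    return major / minor

kie = competition_ratio(3.3, 1.0)          # eqn (1): 3aa vs. 3aa-d4, KIE ~ 3.3
selectivity = competition_ratio(2.4, 1.0)  # eqn (2): 3ca vs. 3pa
print(kie, selectivity)
```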
Based on previous reports 11-13 and our experimental outcomes, a plausible mechanism is shown in Scheme 7. First, the active cationic rhodium catalyst species [Cp*RhX2] is formed through the reaction of the precatalyst [Cp*RhCl2]2 with AgNTf2 or NaOAc. The coordination of 1a to the rhodium catalyst species and subsequent ortho C-H bond activation generates the cationic five-membered rhodacyclic intermediate A with the release of HX (X = NTf2 or OAc
Conclusions
In summary, we have developed a Rh(III)-catalyzed synthesis of 2-arylindoles from easily available arylhydrazines and sulfoxonium ylides under mild conditions. The protocol is useful to prepare various 2-arylindoles because of its high atom economy, broad substrate scope, and simple procedure. The synthesis could be easily scaled up to gram scale even with a reduced catalyst loading.
Conflicts of interest
There are no conflicts to declare. | 2020-11-05T09:07:44.704Z | 2020-10-27T00:00:00.000 | {
"year": 2020,
"sha1": "4963586e74b17d984b3112e44356b69685c6565f",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/ra/d0ra07701a",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "96c43f6846721b392a0f44cda03e707f64233f8e",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
125820702 | pes2o/s2orc | v3-fos-license | Test of Understanding Graphs in Calculus: Test of Students’ Interpretation of Calculus Graphs
Studies show that students, within the context of mathematics and science, have difficulties understanding the concepts of the derivative as the slope and the concept of the antiderivative as the area under the curve. In this article, we present the Test of Understanding Graphs in Calculus (TUG-C), an assessment tool that will help to evaluate students’ understanding of these two concepts by a graphical representation. Data from 144 students of introductory courses of physics and mathematics at a university was collected and analyzed. To evaluate the reliability and discriminatory power of this test, we used statistical techniques for individual items and the test as a whole, and proved that the test’s results are satisfactory within the standard requirements. We present the design process in this paper and the test in the appendix. We discuss the findings of our research, students’ understanding of the relations between these two concepts, using this new multiple-choice test. Finally, we outline specific recommendations. The analysis and recommendations can be used by mathematics or science education researchers, and by teachers that teach these concepts.
INTRODUCTION
The comprehension of various concepts used in science requires students to have an adequate understanding of a function, its first derivative, and its second derivative in their graphical representations. For example, a complete comprehension of kinematics concepts requires students to have an adequate understanding of the graphs of position (function), velocity (first derivative), and acceleration (second derivative). It is important for students to be able to understand, in the context of kinematics, the concept of the derivative as the slope in the relationships between position and velocity, and between velocity and acceleration. Similarly, students should be able to understand, in the context of calculus, the concept of the derivative as the slope in the relationships between a function and the first derivative (f(x) to f'(x)), and between the derivative and the second derivative (f'(x) to f''(x)). In the same way, it is important for students to be able to understand, in the context of kinematics, the concept of the antiderivative as the area under the curve in the relationships between the acceleration and the change in velocity, and between the velocity and the change in position. Correspondingly, in the context of calculus, this means understanding the concept of the antiderivative as the area under the curve in the relationships between the second derivative and the change of the first derivative (f''(x) to Δf'(x)) and between the first derivative and the change of the function (f'(x) to Δf(x)). In this article, we study university students' understanding of these two concepts (slope and area under the curve) in the context of calculus, using a new multiple-choice test. Tests with this feature are highly valued in the area of mathematics and science education research, since they allow the evaluation of conceptual learning of large populations (Redish, 1999; Gurel, Eryilmaz & McDermott, 2015).
Many researchers have analyzed students' understanding of the concepts of slope and area under the curve in the context of science, specifically in physics (McDermott et al., 1987; Beichner, 1994; Woolnough, 2000; Meltzer, 2004; Pollock, Thompson & Mountcastle, 2007; Nguyen and Rebello, 2011), while others have studied this understanding in the context of mathematics (Orton, 1983; Leinhardt et al., 1990; Hadjidemetriou & Williams, 2002; Bajracharya et al., 2012; Christensen & Thompson, 2012; Epstein, 2013). However, to date, no study has presented a multiple-choice test that evaluates students' understanding of these concepts in the context of calculus with a design that follows the steps recommended by mathematics and science education researchers (Beichner, 1994; Ding et al., 2006; Engelhardt, 2009).
To address this need, we conducted a research study with four objectives: (1) to present a multiple-choice test that evaluates students' graph understanding of the concepts of the derivative and the antiderivative (as the slope of the tangent line to the curve at a certain point and as the area under the curve for a given subinterval, respectively) in the context of calculus, together with its design process; (2) to show that it is a content-valid and reliable evaluation instrument with satisfactory discriminatory power according to the analyses recommended by science education researchers (Beichner, 1994; Ding et al., 2006; Engelhardt, 2009); (3) to conduct a detailed analysis of students' understanding of the concepts evaluated in the test; and (4) to outline specific recommendations, based on the previous analysis, for the instruction of these concepts. It is important to mention that we have presented results of preliminary versions of the test in previous short articles (Pérez, Domínguez & Zavala, 2010; Perez-Goytia, Dominguez & Zavala, 2010).
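Objective (2) rests on classical item statistics: item difficulty (proportion correct) and an internal-consistency reliability coefficient such as KR-20 for dichotomously scored items. The sketch below is our illustration with a made-up response matrix; it does not reproduce the paper's actual statistics.

```python
# Classical test-theory quantities for 0/1-scored responses (rows = students).
def item_difficulty(col):
    """Proportion of students answering one item correctly."""
    return sum(col) / len(col)

def kr20(rows):
    """Kuder-Richardson 20 reliability for a list of 0/1 response rows."""
    k = len(rows[0])                      # number of items
    n = len(rows)                         # number of students
    totals = [sum(r) for r in rows]       # total score per student
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n   # population variance
    pq = sum(
        item_difficulty([r[j] for r in rows]) * (1 - item_difficulty([r[j] for r in rows]))
        for j in range(k)
    )
    return (k / (k - 1)) * (1 - pq / var)

responses = [  # hypothetical 6 students x 4 items
    [1, 1, 1, 0], [1, 1, 0, 0], [1, 0, 1, 1],
    [0, 0, 0, 0], [1, 1, 1, 1], [0, 1, 0, 0],
]
print(round(kr20(responses), 3))  # -> 0.656
```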
PREVIOUS RESEARCH
This section is divided into three subsections. In the first and second subsections, we present the most important findings of the studies that have analyzed students' understanding of the concept of the derivative as the slope and the concept of the antiderivative as the area under the curve, respectively. In the third subsection, we describe the tests designed to evaluate these concepts in the context of mathematics, discussing the differences between those tests and our own. The first two subsections are related to the incorrect options that we established in our test, and the third subsection presents a detailed justification of the need for our study and our test.
In this subsection, we focus on the two studies that present an overall classification of students' difficulties with understanding the concept of the derivative as the slope (Leinhardt et al., 1990; Beichner, 1994). Leinhardt et al. (1990) classified students' difficulties into three categories: (1) interval/point confusions, in which students focus on a single point instead of on an interval; (2) slope/height confusions, in which students confuse the height of the graph with the slope; and (3) iconic confusions, in which students incorrectly interpret graphs as pictures. Beichner (1994) designed the "Test of Understanding Graphs in Kinematics (TUG-K)" and applied it to 895 high school and college students. He pointed out the most frequent errors that students make regarding understanding the slope concept and noted that these errors are directly related to the three categories classified by Leinhardt et al. For instance, regarding the first category of Leinhardt et al., Beichner found that students often compute the slope at a point by simply dividing a single ordinate value by a single abscissa value, essentially forcing the line through the origin.
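The "forcing the line through the origin" error described above can be made concrete numerically. A minimal sketch (our example, not an item from any test) contrasts the naive computation y0/x0 with the slope obtained from two nearby points:

```python
# For a line y = m*x + b with b != 0, dividing the ordinate by the abscissa
# at a single point gives the slope of the secant through the origin, not m.
m, b = 2.0, 3.0

def f(x):
    return m * x + b

x0 = 4.0
naive = f(x0) / x0                            # 11/4 = 2.75, not the slope
h = 1e-6
correct = (f(x0 + h) - f(x0 - h)) / (2 * h)   # central difference, ~2.0
print(naive, round(correct, 4))
```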
Students' Understanding of the Concept of the Antiderivative as the Area under the Curve
Several studies have analyzed students' understanding of the concept of the antiderivative as the area under the curve in the context of physics: the majority of them use the context of kinematics (McDermott et al., 1987; Beichner, 1994; Planinic, Ivanjek & Susac, 2013), although some of them use other contexts (Meltzer, 2004; Pollock, Thompson & Mountcastle, 2007; Nguyen and Rebello, 2011; Planinic, Ivanjek & Susac, 2013). In addition, several studies analyze this understanding in the context of mathematics (Orton, 1983; Bajracharya et al., 2012; Planinic, Ivanjek & Susac, 2013). Beichner (1994) presents an overall analysis of students' difficulties with the understanding of the concept of the antiderivative as the area under the curve and classifies them into three categories: (1) not recognizing the meaning of areas under the graph, (2) calculating the slope rather than the area, and (3) area/height confusions, in which students confuse the height of the graph at the last point of the interval with the area. It is noteworthy that Nguyen and Rebello (2011) found that, when presented with several graphs, students had difficulties selecting the graph in which the area under the graph corresponded to a given integral, although all of them could state that "the integral equals the area under the curve."
Related Tests
Our test evaluates students' graph understanding of the concept of the derivative as the slope and the concept of the antiderivative as the area under the curve in the context of calculus, each concept in two different steps using the function, the first derivative, and the second derivative. In the literature, there are two previously designed tests that relate to the present study. The first is the "Calculus Concept Inventory (CCI)" designed by Epstein (2013). The second is the mathematics version of a test designed by Planinic, Ivanjek & Susac (2013). We will briefly describe these tests and identify the differences between them and our test.
The "Calculus Concept Inventory (CCI)" designed by Epstein (2013) is a 22-item multiple-choice test of conceptual understanding of the most basic principles of differential calculus.The test has three dimensions: (1) functions, (2) derivatives, and (3) limits, ratios, and the continuum.Although the test is for calculus students, this inventory does not focus on evaluating students' understanding of the concept of the derivative as the slope and the concept of the antiderivative as the area under the curve.
The mathematics version of the test, designed by Planinic, Ivanjek & Susac (2013), evaluates students' understanding of graphs and focuses on the same concepts as our test. This test has eight questions: five of them refer to the concept of the slope and three to the concept of the area under the graph. However, there are three major differences between this test and ours. The first difference is that not all the questions designed by Planinic et al. are multiple-choice questions: only four of them have this format, and the other four are open-ended questions. We believe that instruments containing open-ended questions are important for research; however, our goal is to obtain an instrument that not only can be used for research but can also be used to assess large student populations and be as easy to analyze as many other multiple-choice instruments available in the literature (i.e., Epstein, 2013). The second difference is related to the objective of the study and the design of the mathematics version of the test: their study focuses on comparing the graphical understanding of the slope and the area under the curve in mathematics with two other contexts. The third difference, and the most important one, is that the context of Planinic et al.'s test is mathematics, while our test belongs specifically to the context of calculus. Planinic et al. use the context of mathematics and ask directly to find "the slope at a point" or "the area under a curve in an interval" in graphs plotted on the x and y axes. In contrast, we use the context of calculus in our study, and we ask students to find the derivative of a function at a point or the change of the antiderivative of a function in an interval. As mentioned before, we evaluate these concepts in two steps using the function, the first derivative, and the second derivative. This assessment in two steps is not possible in the context of mathematics used by Planinic et al. We believe that the differences between our test and the two related published tests justify the need for our study and our test.
METHODS AND TEST DEVELOPMENT
In this section, we cover the first objective of this study: to present a multiple-choice test that evaluates students' graph understanding of the concepts of derivative and antiderivative (as the slope and as the area under the curve, respectively) in the context of calculus, and its design process.
Test Development
We decided to base our new test on the "Test of Understanding of Graphs in Kinematics (TUG-K)" by Beichner (1994), a content-valid and reliable evaluation instrument with satisfactory discriminatory power that is widely used in science education (see, for example: Chanpichai & Wattanakasiwich, 2010; Bektasli & White, 2012; Tejada Torres & Alarcon, 2012; Maries & Singh, 2013; Mesic, Dervic, Gazibegovic-Busuladzic, Salibasic, & Erceg, 2015; Hill & Sharma, 2015), and on our modified version of the TUG-K (Zavala et al., 2017). The original version has been a well-received assessment. However, when analyzing this test, we detected several potential improvements, mainly regarding the parallelism between related objectives and the parallelism between the items of some objectives, but also regarding the representation of the most common alternative conceptions as distractors. To implement those improvements, we decided to modify the test, adding new items and modifying some distractors in some of the original items that remained. That process is described in another study (Zavala et al., 2017). Note that the original version of the TUG-K has 21 items and our modified version has 26 items. The general idea in designing the test presented in this article was to rewrite the items of the TUG-K, removing the context of kinematics and replacing it with the context of calculus. Figure 1 shows an example of this translation.
To create the test described in this article, we designed two preliminary versions of the test and the final version, which we present here. To design the first preliminary version, we rewrote the 26 items of our modified version of the TUG-K, removing the context of kinematics and replacing it with the context of calculus. This version was reviewed by physics and mathematics professors, and special care was put into preserving the original structure of the items. This version was administered to university students of introductory courses in physics and mathematics. The results of this administration (Pérez, Domínguez & Zavala, 2010) showed that, while most of the problems of the test had an almost perfect "translation" from kinematics to calculus, some items lost their meaning or were too difficult for the students to answer. Those items corresponded to objectives 6 and 7 of the TUG-K, which focused on the relationship between a kinematics graph and a textual description. Based on this analysis, it was decided that the second preliminary version of the test would have only 16 items from the remaining objectives of the original test. The results of this second version, which was our pilot study, were analyzed briefly in a previous short article (Perez-Goytia, Dominguez & Zavala, 2010). In that work, we proved that the 16 items in the context of calculus behaved satisfactorily. That is, the results indicated that the TUG-C had the potential to become an appropriate instrument to measure conceptual understanding and graphical interpretation of a function and its derivative.
After this last analysis, we decided to design the final version of the test with the same 16 items, adding some modifications to improve the parallelism of the items. As we will see in the next section, there are several items in the test that are directly related to each other. In this version, we made several modifications to the distractors and graphs of some items so that the items directly related to each other had the same type of distractors and graphs. This allows us to make direct comparisons between these items (as we will do in the analysis section). In the Appendix of this article, we show this last version of the test, which is referred to as the "Test of Understanding Graphs in Calculus (TUG-C)." Note that the order of items in this version is different from the previous versions, since we decided to establish a random item order.
Characteristics of the Test
Table 1 shows a description of the TUG-C (the complete test can be found in the Appendix). The table presents a description of the five dimensions of the test, the items included in each dimension, the concept evaluated (the derivative as the slope, the antiderivative as the area under the curve, or either of them) and the specific step evaluated. Moreover, Table 2 shows a detailed description of the test's 16 items grouped in each of the five dimensions. As shown in Table 1, the first four dimensions contain three items each, and the fifth dimension contains four items. Dimensions 1 & 2 are directly related, since both evaluate the understanding of the concept of the derivative as the slope, and dimensions 3 & 4 are also directly related, since both evaluate the understanding of the concept of the antiderivative as the area under the curve. The difference in these related dimensions lies in which step is evaluated. Dimension 1 evaluates the step from f(x) to f'(x), while dimension 2 evaluates the step from f'(x) to f''(x). On the other hand, dimension 3 evaluates the step from f'(x) to Δf(x), while dimension 4 evaluates the step from f''(x) to Δf'(x).
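The two concepts and the steps listed in Table 1 rest on standard calculus relations, which can be summarized compactly (our own notation, not taken verbatim from the test):

```latex
% Derivative as the slope (dimensions 1 and 2):
%   step 1: f(x) -> f'(x),   step 2: f'(x) -> f''(x)
f'(x_0) = \left.\frac{df}{dx}\right|_{x = x_0}
\quad \text{(slope of the graph of $f$ at $x_0$)}

% Antiderivative as the area under the curve (dimensions 3 and 4):
%   step 1: f'(x) -> \Delta f(x),   step 2: f''(x) -> \Delta f'(x)
\Delta f = f(b) - f(a) = \int_a^b f'(x)\,dx
\quad \text{(signed area under the graph of $f'$ on $[a, b]$)}
```

The second relation applied to f'' in place of f' gives the step from f''(x) to Δf'(x) of dimension 4.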
Table 2 shows that the related dimensions (dimensions 1 & 2 and dimensions 3 & 4) have related items that evaluate the same concept in the same way, with the only difference being the step evaluated. For example, item 1 of dimension 1 evaluates the determination of the positive value of f'(x) from the graph of f(x), while the related item 7 of dimension 2 evaluates the determination of the positive value of f''(x) from the graph of f'(x). The three items of the related dimensions 1 & 2 ask: (1) to determine the positive value of a derivative, (2) to determine the negative value of the derivative, and (3) to identify the interval in which the derivative is the most negative. On the other hand, the three items of the related dimensions 3 & 4 ask: (1) to establish the procedure to determine the change of an antiderivative, (2) to determine the value of the change of an antiderivative, and (3) to identify the variable whose antiderivative has the greatest change in a specific interval. Furthermore, in an overview of the test it is possible to observe relations between the three items of dimensions 1 & 2 and the three items of dimensions 3 & 4. The first two items of each of the dimensions focus on obtaining a value of a variable, and the third item focuses on finding a maximum of this variable.
As shown in Tables 1 and 2, dimension 5 evaluates selecting, among different graphs, the correct graph according to the relationships that each item requests. The items in this dimension evaluate each of the steps evaluated in the other four dimensions: (1) from f(x) to f'(x); (2) from f'(x) to f''(x); (3) from f'(x) to f(x); and (4) from f''(x) to f'(x). Dimension 5 also has related items that evaluate the same concept in the same way, with the only difference being the step evaluated. Items 16 and 9 evaluate selecting the corresponding graph of the derivative from a graph, while items 2 and 4 evaluate selecting the corresponding graph of the antiderivative from a graph. The main difference, and the reason for it being a dimension in itself, is that dimension 5 involves understanding relationships from graph to graph, whereas dimensions 1-4 involve understanding relationships through an operation on the graph (the slope or the area under the curve). In summary, the eight pairs of related items in the test are: 1 and 7, 6 and 11, 13 and 3 in the related dimensions 1 & 2; 5 and 12, 14 and 8, 15 and 10 in the related dimensions 3 & 4; and 16 and 9, 2 and 4 in dimension 5.
Participants
The research was conducted at a large private university in Mexico. The participants in this study were engineering students finishing their introductory calculus-based mechanics course and their first calculus course. The textbook used in the mechanics course was "Physics for Scientists and Engineers" by Serway and Jewett (2008). Students also used the "Tutorials in Introductory Physics" by McDermott, Shaffer, and the Physics Education Research Group (2001). The textbooks used in the calculus course were by Salinas et al. (2000; 2012). This course covers the following main topics: linear function, qualitative analysis of a function and its first and second derivative, quadratic function and Euler's method (interpretation of the area under the curve), analysis of the characteristics, the derivative and applications of different models (polynomial, exponential, sine), and basic integrals with a change of variables. The test was administered as a diagnostic test to 144 students who were completing the courses mentioned above, and it did not count towards the final course grades.
ANALYSIS OF THE TEST
In this section, we cover the second objective of this study: to show that the TUG-C is a content-valid and reliable evaluation instrument with adequate discriminatory power, according to the analysis recommended by mathematics and science education researchers (Beichner, 1994; Ding et al., 2006; Engelhardt, 2009). We divide this section into two subsections: (1) content validity, and (2) reliability and discriminatory power.
Content Validity
We checked the content validity of the items of the TUG-C. Content validity measures how well the test items cover the content domain they intend to test (Engelhardt, 2009). In evaluating the TUG-C, we asked eight experts (four mathematics faculty members and four physics faculty members) to rate each item against its corresponding objective (1 being the lowest and 5 the highest), in accordance with the procedure established by Engelhardt (2009). Each of the items on the TUG-C was rated with a high score regarding the match between the test item itself and its stated objective. The lowest average score for any item was 4.25 and the highest was 4.88. Moreover, the overall average score was 4.76. These results are evidence of the high content validity of the TUG-C.
Reliability and Discriminatory Power
We also evaluated the reliability and discriminatory power of the TUG-C, performing the five statistical tests suggested by Ding et al. (2006). The first three measures focus on individual test items: the item difficulty index, the item discrimination index, and the item point-biserial coefficient. Table 3 shows these values for each item on the TUG-C. The other two measures focus on the test as a whole: the Kuder-Richardson reliability test and Ferguson's delta test. We discuss the results of these five statistical tests below.
Item difficulty index
The item difficulty index (P) is a measure of the difficulty of a single test question. A widely adopted criterion, used by Ding et al. (2006), indicates that the difficulty index should be between 0.3 and 0.9. Table 3 shows the difficulty index P values for each item on the TUG-C. Only two items, items 10 (0.28) and 15 (0.26), have item difficulty indexes slightly lower than desired. Ding et al. also recommended the calculation of the average difficulty value. The criterion range for the average difficulty value is also [0.3-0.9]. For the TUG-C, the average difficulty value is 0.49, which also falls within the suggested range.
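As a concrete sketch, the difficulty index of an item is simply the fraction of students answering it correctly; the scores below are hypothetical, not actual TUG-C data:

```python
def difficulty_index(item_scores):
    """Item difficulty index P: fraction of correct (1) answers on one item."""
    return sum(item_scores) / len(item_scores)

# Hypothetical 0/1 scores for one item across ten students.
scores = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
print(difficulty_index(scores))  # 0.5, inside the recommended [0.3, 0.9] range
```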
Item discrimination index
The item discrimination index (D) is a measure of the discriminatory power of each item on a test. Ding et al. (2006) established two criteria for this index: (1) eliminate items with negative indexes, and (2) the majority of the test items should have a good discrimination index (D ≥ 0.3). Table 3 shows the discrimination index D values for each item of the TUG-C (using the 25%-25% method). We observe that the TUG-C satisfies these two criteria, since there are no items with negative indexes, and all of the items have a discrimination index over 0.3. Ding et al. also recommended the calculation of the average discrimination index, suggesting a value of ≥ 0.3. For the TUG-C, the average discrimination value is 0.64 (using the 25%-25% method), which meets this criterion.
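A minimal sketch of the 25%-25% method, assuming the usual definition (the difference between the item's correct-answer fractions in the top and bottom quartile groups, ranked by total score); the data are illustrative:

```python
def discrimination_index(total_scores, item_scores, frac=0.25):
    """Item discrimination index D via the 25%-25% method."""
    n = len(total_scores)
    k = max(1, int(n * frac))                        # size of each extreme group
    order = sorted(range(n), key=lambda i: total_scores[i])
    low, high = order[:k], order[-k:]                # bottom and top groups
    return (sum(item_scores[i] for i in high)
            - sum(item_scores[i] for i in low)) / k

# Hypothetical data: 8 students' total scores and their 0/1 scores on one item.
totals = [15, 3, 10, 8, 12, 5, 14, 4]
item   = [1, 0, 1, 0, 1, 0, 1, 0]
print(discrimination_index(totals, item))  # 1.0: the item separates the groups perfectly
```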
Point-biserial coefficient
The point-biserial coefficient (rpbs) is a measure of the consistency of a single item in relation to the whole test, reflecting the correlation between students' scores on an individual item and their scores on the entire test. A widely adopted criterion, followed by Ding et al. (2006), is that an item with a satisfactory point-biserial coefficient must have rpbs ≥ 0.2. Table 3 shows the point-biserial coefficient for each item on the TUG-C. We can see that all of the TUG-C's items satisfy this condition. Ding et al. also recommended the calculation of the average point-biserial coefficient, with a criterion of ≥ 0.2. The average coefficient of the TUG-C is 0.51, which also satisfies this criterion.
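The point-biserial coefficient is just the Pearson correlation between a dichotomous (0/1) item score and the total score, so it can be computed directly; the data below are illustrative:

```python
import statistics

def point_biserial(item_scores, total_scores):
    """Pearson correlation between 0/1 item scores and total test scores."""
    n = len(item_scores)
    mx, my = statistics.mean(item_scores), statistics.mean(total_scores)
    cov = sum((x - mx) * (y - my)
              for x, y in zip(item_scores, total_scores)) / n
    return cov / (statistics.pstdev(item_scores) * statistics.pstdev(total_scores))

# Hypothetical data: an item answered correctly by the two higher scorers.
print(round(point_biserial([1, 1, 0, 0], [10, 8, 4, 2]), 3))  # 0.949
```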
Kuder-Richardson reliability index and Ferguson's delta test
The Kuder-Richardson reliability index is a measure of the self-consistency of a whole test. Ding et al. (2006) state that a test with a reliability index higher than or equal to 0.7 is reliable for group measures. The index for the TUG-C is 0.81, which meets this criterion. Ferguson's delta test measures the discriminatory power of an entire test by investigating how broadly the total scores of a sample are distributed over the possible range. A widely adopted criterion, followed by Ding et al., is that a test with a Ferguson's delta higher than 0.9 offers good discrimination. Ferguson's delta for the TUG-C is 0.99, which satisfies this requirement.
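Both whole-test measures can be sketched from their standard formulas (KR-20 for dichotomously scored items, and Ferguson's delta from the frequency distribution of total scores); this is a generic implementation under those assumptions, not the authors' own code:

```python
def kr20(matrix):
    """Kuder-Richardson 20 reliability; matrix[s][i] = 0/1 score of student s on item i."""
    n, k = len(matrix), len(matrix[0])
    p = [sum(row[i] for row in matrix) / n for i in range(k)]   # item difficulties
    totals = [sum(row) for row in matrix]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n              # population variance of totals
    return (k / (k - 1)) * (1 - sum(pi * (1 - pi) for pi in p) / var)

def fergusons_delta(totals, k):
    """Ferguson's delta for a k-item test, from the list of total scores."""
    n = len(totals)
    freq = {}
    for t in totals:
        freq[t] = freq.get(t, 0) + 1
    return (n * n - sum(f * f for f in freq.values())) / (n * n - n * n / (k + 1))
```

As sanity checks, two perfectly correlated items give a KR-20 of 1, and total scores spread uniformly over every possible score give a delta of 1.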
Summary of the five statistical tests
We present a summary of the five statistical tests in Table 4. From the analysis, we can conclude that the TUG-C is a reliable test with satisfactory discriminatory power.
ANALYSIS OF STUDENTS' UNDERSTANDING OF THE CONCEPTS OF DERIVATIVE AND ANTIDERIVATIVE
In this section, we cover the third objective of this study: to conduct a detailed analysis of students' understanding of the concepts evaluated by the TUG-C. Specifically, we studied the results of 144 students who had completed their introductory calculus-based mechanics course and their first calculus course.
Overall Performance
The average score on the TUG-C, for the sample of 144 students, is 7.88 out of 16 possible points (each test item is worth 1 point). This average, expressed as a percentage of the total possible points, is 49%, which corresponds to the average difficulty index value (0.49) shown in the previous section. The distribution of scores was significantly non-normal (Kolmogorov-Smirnov, D(144) = 0.093, p < 0.01; Shapiro-Wilk test, W(144) = 0.965, p < 0.01). The skewness of the distribution of scores is 0.152 (SE = 0.202), indicating a slight pile-up of scores at the lower end with a tail to the right, and the kurtosis of the distribution is -0.991 (SE = 0.401), indicating a flatter-than-normal distribution. The positive skew indicates that the test is difficult for the students. For this type of distribution, it is more useful to use quartiles as measures of spread. The median of the distribution is 8, the bottom quartile (Q1) is 4.25, and the top quartile (Q3) is 11, so the interquartile range is 6.75. In this overall analysis, it is noteworthy that a student at the median (8) answered half of the 16 questions on the TUG-C incorrectly.
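The quartile-based summary can be reproduced with the standard library, though the exact Q1/Q3 values depend on the interpolation method used; the scores below are hypothetical, not the actual 144 responses:

```python
import statistics

# Hypothetical total scores on a 0-16 scale, not the actual TUG-C sample.
scores = [2, 4, 5, 6, 7, 8, 8, 9, 10, 11, 12, 14]
q1, median, q3 = statistics.quantiles(scores, n=4)  # default 'exclusive' interpolation
print(median, q3 - q1)  # 8.0 5.5 (median and interquartile range)
```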
The overall results show that this is not an easy test. Students struggle with questions they may not be familiar with. The concepts included in the test are taught in their courses, but probably not in the same way the test presents them. That the test's statistical indicators are satisfactory suggests that students answered the questions with engagement and interest, even though the questions were not presented in the way they are used to.
Performance on Three Representative Items of the Test
In this subsection, we conduct a qualitative analysis of students' performance on three representative items of the test: 1, 14 and 16 (see Table 5 and the Appendix). As shown in Table 1, item 1 evaluates the concept of the derivative as the slope, item 14 assesses the concept of the antiderivative as the area under the curve, and item 16 evaluates the use of either of the two concepts to determine the corresponding graph from a specific graph. Figure 2 presents item 1, which evaluates the concept of the derivative as the slope by asking students to determine the positive value of f'(x) at a point from the graph of f(x). Only 37% of students select the correct option C. The most frequent error is to obtain this value by dividing the ordinate by the abscissa of the point on the graph, which is not valid in this situation (option D, 26%). Moreover, two other incorrect options are selected in similar proportions, above 10%: selecting the ordinate of the point (option E, 15%), and calculating the value of a "slope" by counting squares (option A, 12%).
Figure 3 presents item 14, which evaluates the concept of the antiderivative as the area under the curve by asking students to determine the value of the change Δf(x) in an interval from the graph of f'(x). In this item, 49% of students select B, the correct option. The most frequent error is to use the correct procedure to calculate the slope in the interval instead of the area under the curve (option D, 24%). The other three incorrect options are selected in similar proportions. In one of them, students select the ordinate value of the point on the right of the interval, x = 4 (option C, 13%). In another, students use an incorrect procedure to calculate the slope of the curve in the interval, dividing the abscissa by the ordinate of the point on the right of the interval (option E, 7%). In the third, students multiply the abscissa by the ordinate of the point on the right of the interval (option A, 7%). It is interesting to notice that the latter multiplication is part of the correct procedure to calculate the area under the curve, but students do not divide this multiplication by two.
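The pattern behind option A can be made concrete with a generic linear example (our own illustration, not the actual graph of item 14): for f'(x) = mx on the interval [0, b],

```latex
\Delta f = \int_0^b m x \, dx
         = \frac{m b^2}{2}
         = \frac{1}{2}\,\underbrace{b}_{\text{abscissa}}\cdot\underbrace{m b}_{\text{ordinate}}
```

so multiplying the abscissa by the ordinate of the right endpoint reproduces the area of the triangle only after dividing by two, which is exactly the step the students omit.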
Figure 4 presents item 16, which evaluates the determination of the corresponding graph of f'(x) from the graph of f(x). In this item, 49% of students select D, the correct option. In the most frequent error, students seem to understand the shape the graph should have but have difficulties relating the relative values of the slopes of the graph, opting for a relationship opposite to the correct one (option B, 27%). In the second most frequent error, students make a mistake only in the section of the graph in which the derivative is zero. Instead of setting a step-type graph with a value of zero in this interval, students choose option C (14%), in which the derivative value decreases uniformly in that interval. Finally, in the third most frequent error (option E, 7%), students only make a mistake with the sign of the value of the slope in the last section of the graph.
Items and Dimensions
Table 6 shows the proportion of students selecting the correct choice of the related items, the proportion of students selecting the correct choice in both of the related items, and the average of the correct choice of each dimension.
From Table 6, we can note three issues regarding these results. The first is that the five dimensions have very close average values, ranging from 44% to 53%. The second is that the value of these averages is relatively low, around 50%. These results show that students have similar difficulties with the concepts evaluated in the test. The third is that individual results for the items range from 26% for item 15 to 66% for item 12. This shows that the concepts evaluated in all items of the test are difficult for students, since even in the item with the highest percentage (item 12), a third of the students had difficulty answering the question correctly.
In the following subsections, we present two analyses. In the first, we compare the related items of the test; in the second, we cluster the items of the test according to difficulty level.
Related items in the test
The related items evaluate the same concept in the same way, with the only difference being the step evaluated. Therefore, it is relevant for instructional reasons to perform a comparison of the correct answers to these related items. When we qualitatively compare the proportions of students answering the related items correctly, we observe that they are very similar. Moreover, comparing students' correct answers on these related items using the chi-square test, following the procedure described by Sheskin (2007), we found no significant differences in choosing the correct answer in any of the related items. Since there is no significant difference in the selection of the correct answer in the related items, we could think that there is consistency in students' answers, that is, that the majority of students who correctly answer the item that evaluates the first step also correctly answer the item that evaluates the second step. However, when we perform a cross-analysis showing the proportion of students answering both related items correctly (see Table 6), we observe that in several related items this proportion is considerably lower than that for each of the items. Therefore, a considerable number of students correctly answer one of the items but incorrectly answer the other.
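The cross-analysis described above can be sketched as a simple cross-tabulation of two 0/1 response vectors (illustrative data; the significance test itself would follow Sheskin's chi-square procedure):

```python
def cross_analysis(item_a, item_b):
    """Proportions correct on each of two related items, and on both jointly."""
    n = len(item_a)
    p_a = sum(item_a) / n
    p_b = sum(item_b) / n
    p_both = sum(1 for a, b in zip(item_a, item_b) if a == 1 and b == 1) / n
    return p_a, p_b, p_both

# Hypothetical responses of four students to two related items.
print(cross_analysis([1, 1, 0, 0], [1, 0, 1, 0]))  # (0.5, 0.5, 0.25)
```

Here the marginal proportions match (0.5 each), yet only a quarter of the students answer both items correctly, the same dissociation the paper reports for several related pairs.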
For example, for items 1 and 7 of dimensions 1 and 2 (37% and 43% of students answering correctly, respectively) we notice that 24% of the total students answered both items correctly, that is, 65% of students answering item 1 correctly, answer item 7 correctly.We also notice that 13% of students answered item 1 correctly, which evaluates the first step, but answered item 7 incorrectly, which evaluates the second step, and that 19% of students answered item 7 correctly but answered item 1 incorrectly.
For items 5 and 12 of dimensions 3 and 4 (58% and 66% of students answering correctly, respectively), we observe that 48% of students answered both items correctly, that is, 83% of students who answered item 5 correctly, answered item 12 correctly.We also observe that 10% of students answered item 5 correctly, which evaluates the first step, but answered item 12 incorrectly, which evaluates the second step, and that 18% of students answered item 12 correctly but answered item 5 incorrectly.
Finally, for items 16 and 9 of dimension 5 (49% and 61% respectively), we observe that 35% of students answered both items correctly, that is, 71% of students who answered item 16 correctly, also answered item 9 correctly.We also observe that 14% of students answered item 16 correctly, which evaluates the first step, but answered item 9 incorrectly, which evaluates the second step, and that 26% of students answered item 9 correctly but answered item 16 incorrectly.
We can hypothesize that only when a student answers the two related items correctly may he or she have a complete understanding of the concept. From Table 6, we observe that the proportion of students having a complete understanding of the concept is quite low, ranging from 21% for items 15 and 10 to 48% for items 13 and 3, and items 5 and 12. There is a considerable proportion of students showing only a partial understanding of the concept, since they answer one related item correctly but the other incorrectly.
Cluster of items according to difficulty level
According to Table 6, the most difficult items are those from dimensions 3 & 4 that evaluate the identification of the variable whose antiderivative has the greatest change in a specific interval. Only 26% of students answered item 15 correctly, which evaluates the first step (from f'(x) to f(x)), and only 28% of students answered item 10 correctly, which evaluates the second step (from f''(x) to f'(x)). These two items have in common that they assess the greatest change of an antiderivative in an interval.
These two questions ask students to choose a graph that has the greatest change of a function (the function for item 15 and the first derivative of the function for item 10) given the graph of the derivative (the first derivative for item 15 and the second derivative for item 10). As we have seen in other items, some students confuse the concept of the slope with the concept of the area under the curve. Therefore, these two questions have some options that might be attractive to those students, since in option D for item 15 and option D for item 10 the slope changes continuously. Other students might be confused by the word "change" by thinking about the change of the function in the graphs. In that case, options C and E for item 15 and options A and C for item 10 change; moreover, in option B for item 15 and option E for item 10, the function changes more since it goes from zero to the maximum and then back to zero. What is not attractive for those students is option A for item 15 and option B for item 10, which are the correct answers, since in those options neither the slope of the function nor the function changes in the interval. These items could be good discriminatory items for those who understand the concept of the antiderivative as the area under the curve. Table 3 shows the item discrimination index, which is a measure of the discriminatory power of each item on a test. Item 15 is considerably above average (0.75 vs. average = 0.64) and item 10 is slightly below average (0.61). This index is the discriminatory power with respect to the test as a whole; it would probably be better for item 10 if we took only the items that correspond to the concept of the antiderivative. On the other hand, the two items have an above-average point-biserial coefficient, which is a measure of the correlation between students' scores on an individual item and their scores on the entire test (0.63 for item 15 and 0.56 for item 10 vs. average = 0.51). In fact, item 15 has the second highest coefficient in the table.
Table 6 also shows the easiest items, which form two groups of related items. The first group comes from the items of dimensions 3 & 4 (items 5 and 12, respectively), which evaluate the account of the procedure to determine the change of an antiderivative. Item 5, which evaluates the first step (from f'(x) to Δf(x)), was answered correctly by 58% of students, and 66% of students answered item 12 correctly, which evaluates the second step (from f''(x) to Δf'(x)). The second group of related items comes from dimensions 1 & 2 (items 13 and 3, respectively), which evaluate the identification of the interval in which the derivative is the most negative. Item 13, which evaluates the first step (from f(x) to f'(x)), was answered correctly by 61% of students, and 53% of students answered item 3 correctly, which evaluates the second step (from f'(x) to f''(x)). The items of these groups have in common that solving them does not require making accurate calculations.
Items 5 and 12 (dimensions 3 & 4, respectively) correspond to items in which students have to choose, among different descriptions, the one that represents the concept of the antiderivative as the area under the curve. Items 14 and 8 evaluate the same concept, but in these two cases students are asked to calculate the change instead of choosing a procedure. Students' results on items 5 and 12 are considerably better than those on items 14 and 8. It seems that the correct answer to items 5 and 12 attracts not only those students able to carry out the procedure without stating what it is, but also those students who, although they recognize the procedure when it is presented, would not be able to carry it out by themselves.
Finally, from Table 6, we observe that the other five groups of related items have a medium difficulty level. These groups of items evaluate the determination of the positive and negative value of the derivative (two groups, dimensions 1 & 2), the determination of the change of the antiderivative (one group, dimensions 3 & 4), and the determination of the corresponding graph of the derivative or the antiderivative from a graph (two groups, dimension 5). The items from these five groups have in common that, to solve them, it is necessary to make accurate calculations, unlike the items of the groups that were the easiest and the most difficult for students. This type of calculation is necessary in all of the items of dimension 5 when choosing the correct graphs, since in all items there are incorrect graphs very similar to the correct choice but with slight differences (e.g., the incorrect option B in item 16), and students need to make careful calculations to choose the correct answer.
Most Frequent Errors
In this subsection, we present an overall analysis of the most frequent errors in the items (a) from the related dimensions 1 & 2, (b) from the related dimensions 3 & 4, and (c) from dimension 5. Table 5 shows the five dimensions evaluated in the TUG-C, the items' descriptions, and the results for each option of the items. Note that the percentages of the correct answers correspond to the difficulty indices shown in Table 3.
Items of the related dimensions 1 & 2
The items of these dimensions evaluate students' understanding of the concept of the derivative as the slope.Dimension 1 evaluates the determination of f'(x) from the graph of f(x), and dimension 2 evaluates the determination of f''(x) from the graph of f'(x).
Dimensions 1 & 2 have two items that evaluate the determination of a positive and a negative value of a derivative at a point of a curve (dimension 1: items 1 and 6; dimension 2: items 7 and 11). Table 5 shows that, for all these items, the most frequent error is obtaining this value by dividing the ordinate by the abscissa of the point in the graph (item 1: option D, 26%; item 6: option D, 24%; item 7: option C, 25%; item 11: option A, 15%). It is important to note that in the items in which the derivative is negative (items 6 and 11), students add a negative sign to the obtained value. An interesting tendency is that the proportion of students answering correctly is higher for items with a negative derivative than for items with a positive derivative. This error is also rather common in the context of kinematics (Beichner, 1994), but in that case the misunderstanding comes from the conception that velocity (or acceleration) is distance divided by time (or velocity divided by time). In this case, the error could come from the way in which students interpret the derivative, df/dx, which could be read, as in kinematics, as a ratio of two quantities, the function and x.
Dimensions 1 & 2 have a third item which evaluates the identification of the interval in which the derivative is the most negative. Dimension 1 evaluates the identification of the interval in which f'(x) is the most negative in the graph of f(x) (item 13), and dimension 2 evaluates the identification of the interval in which f''(x) is the most negative in the graph of f'(x) (item 3). In these two items, we observe two frequent errors: choosing an interval in which the derivative is negative but not the most negative (item 13: option D, 11%; item 3: option B, 22%), and choosing the point at which the graph has a minimum value (item 13: option A, 13%; item 3: option C, 10%).
An interesting result is that these two errors are connected. Some students choose the former interval because, in that interval, not only is the slope negative, but the function itself also becomes negative: the last point of the interval is the most negative value of the function in the graph for both items.
Items of the related dimensions 3 & 4
The items of these dimensions evaluate students' understanding of the concept of the antiderivative as the area under the curve.Dimension 3 evaluates the determination of Δf(x) from the graph of f'(x), and dimension 4 evaluates the determination of Δf'(x) from the graph of f''(x).
Dimensions 3 & 4 have an item that evaluates the account of the procedure to determine the change of an antiderivative in an interval from a graph (dimension 3: item 5; dimension 4: item 12). Note that the slope of the curves is constant in the interval. As shown in Table 5, the most frequent error in these two items is to account for the procedure to calculate the slope of the curve instead of the area under the curve (item 5: option C, 33%; item 12: option B, 18%).
These dimensions also have an item that evaluates the determination of the value of the change of an antiderivative. Item 14 evaluates the determination of the value of Δf(x) from the graph of f'(x) (dimension 3), and item 8 evaluates the determination of the value of Δf'(x) from the graph of f''(x) (dimension 4). We observe a pattern: the sums of the percentages of the two answers in which students use correct or incorrect procedures to calculate the slope of the curve, instead of the area under the curve, are similar in the two items, and these two answers are the most frequent errors in both items. The first incorrect choice is to use the correct procedure to calculate the slope in the interval instead of the area under the curve (item 14: option D, 24%; item 8: option B, 11%). The second incorrect choice is to use an incorrect procedure to calculate the slope of the curve in the interval (item 14: option E, 7%; item 8: option A, 15%). The sum of these percentages is 31% for item 14 and 26% for item 8. The difference between these sums is minimal (only 5%), and these two choices together are the most frequent errors in both items.
It seems that the most important challenge for instruction is that the concept of the antiderivative as the area under the curve is confused with that of the derivative as the slope. For students, and probably more commonly for first-year students, the slope is the concept they learn first. Thus, they resort to it even in cases in which it does not apply.
Dimensions 3 & 4 have a third item that evaluates the identification of the variable whose antiderivative has the greatest change in a specific interval (dimension 3: item 15; dimension 4: item 10). In these two items, we found the same two most frequent errors. In the first, students do not choose the graph of the curve with the greatest area under the curve in the interval, but rather the graph of a curve whose slopes in the interval are always increasing (item 15: option D, 31%; item 10: option D, 31%). In these items, students seem to be thinking in terms of the slope, as in the previous items of these dimensions. The item asks for the variable with the greatest change in the interval, and these students choose the curve that has the greatest change in positive slopes. The second most frequent error in these items is choosing the graph of a curve that increases in the first half of the interval and decreases in the other half (item 15: option B, 29%; item 10: option E, 28%). (The curve is like an inverted parabola. The left point of the interval is (0, 0) and the right point of the interval is (constant, 0); therefore, the vertical change of the curve in the interval is zero.)
In these errors, students seem to be thinking in terms of two resources, which are different ways of thinking about a situation (Hammer, 2000). The first resource is to think in terms of the slope: this item asks for the variable with the greatest change in the interval, and students choose the curve that has the greatest change in slope, as it begins at a high positive value and ends at the same value, but negative. The second resource is to think in terms of the vertical value. Although the vertical change is zero in the curve of the graph, students seem to think in terms of the vertical value, and conclude that this curve has the greatest vertical change since it "rises and falls" (as interviews with some students have revealed).
Items of dimension 5
The four items of dimension 5 evaluate the determination of the corresponding graph from a graph. Students can solve these items using either of the two concepts evaluated in the first four dimensions: the concept of the derivative as the slope and/or the concept of the antiderivative as the area under the curve.
As shown in Table 6, two items of this dimension evaluate the determination of the corresponding derivative graph from a graph (items 16 & 9). Item 16 evaluates the determination of the corresponding f'(x) graph from the f(x) graph, and item 9 evaluates the determination of the corresponding f''(x) graph from the f'(x) graph. The two most frequent errors in these two related items are very similar (see Table 5). In item 16, the most frequent error corresponds to a graph in which students seem to understand the shape the graph should have, but have difficulties relating the values of the slopes of the graph, opting for a relationship opposite to the correct one (option B, 27%). In item 9, the most frequent error corresponds to a graph in which students also seem to understand the shape the graph should have, but have difficulties relating the values of the slopes of the graph, choosing absolute values for the slopes (option C: 19%; see item 9 in the Appendix). In these two items, the second most frequent errors are the same (item 16: option C, 14%; item 9: option B, 13%). In these choices, students make mistakes only in the sections of the graph in which the derivative is zero. Instead of selecting a step-type graph with a value of zero in this section, students choose the option in which the constant values are connected by a straight line.
Interestingly, the differences in the first most common error in the two items appear to be due to slight differences in the graphs shown. The first section of the graph of item 9 goes from negative values to positive values, unlike the first section of item 16, which has only positive values. This subtle difference between the graphs (which is unimportant to the expert) seems to have a certain effect on the errors that are triggered in students. This is consistent with studies in science education reporting that superficial features of problems are very important for novices (Leonard, Gerace & Dufresne, 1999).
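The correct correspondence these items probe can be illustrated numerically. The function below is a hypothetical piecewise-linear curve, not one of the actual test items: a central finite difference recovers the step-type derivative graph, including a genuine horizontal step at zero over the flat section rather than the connecting straight line students tend to choose.

```python
# Hypothetical piecewise-linear curve (not an actual TUG-C item): rising
# with slope +2, then flat, then falling with slope -1. Its derivative
# graph is a step graph; over the flat section the derivative really is a
# horizontal step at 0, not a straight line connecting +2 and -1.
def f(x):
    if x < 1.0:
        return 2.0 * x            # slope +2
    elif x < 2.0:
        return 2.0                # slope 0 (flat section)
    else:
        return 2.0 - (x - 2.0)    # slope -1

def approx_slope(x, h=1e-6):
    # central finite-difference estimate of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# one sample point inside each of the three sections
print([round(approx_slope(x), 3) for x in (0.5, 1.5, 2.5)])  # [2.0, 0.0, -1.0]
```

Sampling the slope inside each section makes the step structure of the derivative graph explicit.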
There are two other items of dimension 5, which evaluate the determination of the corresponding graph of the antiderivative from a graph (items 2 & 4). Item 2 evaluates the determination of the corresponding f(x) graph from the f'(x) graph, and item 4 evaluates the determination of the corresponding f'(x) graph from the f''(x) graph. These two related items have the same most frequent error (item 2: option D, 24%; item 4: option E, 28%). In this choice, students seem to understand the shape of the antiderivative graph, but have difficulties relating the absolute values of the slopes of the sections with slopes different from zero, opting for a relationship opposite to the correct one (see items 2 & 4 in the Appendix).
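Conversely, an antiderivative graph can be thought of as accumulated signed area under the given graph. The sketch below is a generic example, not a test item: it reconstructs f(x) from f'(x) = 2x with the trapezoid rule and recovers the parabola f(x) = x².

```python
# Generic example (not a test item): reconstruct an antiderivative graph as
# accumulated signed area under the given graph, here f'(x) = 2x, so the
# recovered curve should follow f(x) = x**2 when we start from f(0) = 0.
def fprime(x):
    return 2.0 * x

n, a, b = 100, 0.0, 2.0
h = (b - a) / n
xs = [a + i * h for i in range(n + 1)]
F = [0.0]                          # f(a) chosen as 0
for i in range(n):
    # trapezoid rule: signed area of one strip under the f'(x) graph
    F.append(F[-1] + 0.5 * (fprime(xs[i]) + fprime(xs[i + 1])) * h)

print(round(F[-1], 6))             # 4.0, i.e. f(2) = 2**2
```

Because the integrand is linear, the trapezoid rule here reproduces the antiderivative exactly up to floating-point rounding.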
RECOMMENDATIONS FOR INSTRUCTION
This section addresses the fourth objective of the study: to establish recommendations for the instruction of the concepts of derivative and antiderivative based on the results obtained from the TUG-C. McDermott (2001) suggests that every curricular change should originate from research on students' understanding. The analysis of student performance presented in this article is part of such research on students' understanding of the concept of the derivative as the slope and the concept of the antiderivative as the area under the curve in the context of calculus. It also allows us to establish specific recommendations for instruction on these concepts. Next, we summarize the most important findings derived from our analysis of the students' performance, and then we make some recommendations for instruction.
Since the distribution of the students' scores on the test shows a positive skew, we can state that the test presents numerous challenges for students. We noticed that students at the median of the distribution (a score of 8) failed to answer correctly 8 of the 16 items on the test. Since the topics covered on the test are concepts that students should have learned in early mathematics and science courses at the university level, this result shows the need to modify instruction in order to increase students' conceptual understanding of the concepts of the derivative and the antiderivative.
Moreover, we observe that the average percentage of correct answers for every dimension is relatively low, tending toward 50%, and that students have similar difficulties across these five dimensions. This shows that instruction needs to be modified for the skills and concepts evaluated in all five dimensions of the test.
Interestingly, in our analysis we found no significant differences in choosing the correct answer in any of the related items of the test. From this we notice: (a) that students' performance in the items of dimension 1, which evaluate the determination of f'(x) from the graph of f(x), is similar to their performance in the items of dimension 2, which evaluate the determination of f''(x) from the graph of f'(x); (b) that students' performance in the items of dimension 3, which evaluate the determination of Δf(x) from the graph of f'(x), is similar to their performance in the items of dimension 4, which evaluate the determination of Δf'(x) from the graph of f''(x); (c) that students' performance in the items that evaluate the determination of the corresponding graph of the derivative is similar across the two steps evaluated in the test; and (d) that students' performance in the items that evaluate the determination of the corresponding graph of the antiderivative is also similar across the two steps evaluated in the test. These results could be positive, since we can infer that students are learning regardless of whether we talk about the function, its first derivative, or its second derivative. This could mean that students have a level of understanding of the relationships between derivatives and antiderivatives regardless of the derivative order, which is encouraging. However, taking into account the low performance of students on the test, we can also conclude that they are similarly lacking in understanding of these relationships in calculus, particularly in graphs. Also, if we consider that only a student who answered both related items correctly has a complete understanding of the concept, we observe from Table 6 that the proportion of students having a complete understanding is quite low, ranging from 21% for items 15 & 10 to 48% for items 13 & 3 and items 5 & 12. A considerable proportion of students show only a partial understanding of the concept, since they answer one related item correctly but the other incorrectly.
According to the classification of items by difficulty level, the most difficult items for students are those of dimensions 3 & 4 that evaluate the identification of the variable whose antiderivative has the greatest change in a specific interval. These items have in common that they assess the maximum change of the antiderivative over an interval. Therefore, a general instructional recommendation is to focus specifically on teaching the skills needed to solve this type of item. McDermott (2001) also proposes that persistent conceptual errors must be explicitly addressed during instruction. We identified the most frequent error for each pair of related items of the test. Mathematics and science teachers can use this catalog of errors when planning their instruction of the concepts of the derivative and the antiderivative. Moreover, analyzing the most frequent errors identified in the previous section, we noticed that there are four errors with a selection percentage higher than 20% in both related items. The first error is to obtain the value of a derivative at a point of a curve (with a positive derivative) by dividing the abscissa by the ordinate of the point in the graph (item 1: option D, 26%; item 7: option C, 25%). The second and third errors occur in the two items that evaluate the identification of the variable whose antiderivative has the greatest change in a specific interval. In these two items, we found that students do not choose the graph of the curve with the greatest area under it, but rather a graph of a curve whose slopes in the interval are always increasing (item 15: option D, 31%; item 10: option D, 31%), or a graph of a curve that increases in the first half of the interval and decreases in the other half (item 15: option B, 29%; item 10: option E, 28%). Finally, the fourth error is found in the items that evaluate the determination of the corresponding graph of the antiderivative from a graph. In this error, students seem to understand the general shape of the antiderivative graph, but have difficulties relating the absolute values of the slopes of the sections with slopes different from zero, choosing a relationship opposite to the correct one (item 2: option D, 24%; item 4: option E, 28%). We recommend that mathematics and science teachers focus on these errors in particular when planning their instruction. Instructional materials for these topics should include sections in which students reflect on their own learning to realize that the concept and procedure of calculating a first derivative from a function are the same as those of calculating a second derivative from the first derivative of the function. The materials should also include sections in which students reflect to realize that the concept and procedure of calculating the change of a function from the graph of its first derivative are the same as those of calculating the change of the first derivative from the graph of the second derivative.
CONCLUSION
In this article, we first presented the "Test of Understanding Graphs in Calculus (TUG-C)" and its design process. We then showed that the TUG-C is a content-valid and reliable evaluation instrument with satisfactory discriminatory power according to the analyses recommended by mathematics and science education researchers (Beichner, 1994; Ding et al., 2006; Engelhardt, 2009). Next, we conducted a detailed analysis of students' understanding of the concept of the derivative as the slope and the concept of the antiderivative as the area under the curve as evaluated by the TUG-C. Finally, we outlined specific recommendations, based on that analysis, for the instruction of these concepts. This article has two main implications. The first is that the test presented in the Appendix can be used by mathematics or science education researchers and by teachers covering these concepts. It is important to note that the TUG-C is the first test for evaluating students' understanding of the concept of the derivative as the slope and the concept of the antiderivative as the area under the curve in the context of calculus that satisfies all the criteria recommended by mathematics and science education researchers. The test could be used to analyze students' understanding of these concepts in different institutions, to investigate students' learning performance, and to test the effectiveness of new research-based instructional material designed to increase student knowledge and understanding (Hake, 1998; Redish, 1999). The second implication is that the instructional recommendations established in this article could also be taken into account by researchers and teachers, and could guide the design of new instructional material intended to increase students' understanding of these concepts.
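For readers who wish to apply the same psychometric screening to their own instruments, the item statistics reported for the TUG-C (difficulty index P, discriminatory index D, and point-biserial coefficient r_pbs; see Table 3) can be computed from scored responses. The sketch below uses one common set of classical test theory formulas; the grouping convention for D (here, a top-half/bottom-half split by total score) varies among authors and is an assumption, not necessarily the convention used for the TUG-C.

```python
# Classical test theory item indices; the exact grouping convention for the
# discrimination index varies by author, so this sketch assumes a simple
# top-half / bottom-half split of students by total score.
from statistics import mean, pstdev

def item_stats(item, totals):
    """item: 0/1 responses to one item; totals: each student's total score."""
    n = len(item)
    P = mean(item)                                   # difficulty: proportion correct
    order = sorted(range(n), key=lambda i: totals[i])
    low, high = order[: n // 2], order[n - n // 2:]
    # discrimination: correct rate in high group minus correct rate in low group
    D = mean(item[i] for i in high) - mean(item[i] for i in low)
    p, q = P, 1.0 - P
    mp = mean(totals[i] for i in range(n) if item[i] == 1)
    mq = mean(totals[i] for i in range(n) if item[i] == 0)
    # point-biserial correlation between this item and the total score
    rpbs = (mp - mq) / pstdev(totals) * (p * q) ** 0.5
    return P, D, rpbs
```

For example, with the made-up data `item_stats([1, 1, 1, 0, 0, 0], [10, 9, 8, 3, 2, 1])`, the sketch gives P = 0.5, D = 1.0, and r_pbs of roughly 0.97.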
Figure 1. Example of the translation of item 11 of our "Test of Understanding Graphs in Calculus (TUG-C)"
Description of the items of the Test of Understanding Graphs in Calculus (TUG-C)
Dimension 1:
1. Determine the positive value of f'(x) from the graph of f(x)
6. Determine the negative value of f'(x) from the graph of f(x)
13. Identify the interval in which f'(x) is the most negative in the graph of f(x)
Dimension 2:
7. Determine the positive value of f''(x) from the graph of f'(x)
11. Determine the negative value of f''(x) from the graph of f'(x)
3. Identify the interval in which f''(x) is the most negative in the graph of f'(x)
Dimension 3:
5. Establish the procedure to determine the Δf(x) from the graph of f'(x)
14. Determine the value of the change Δf(x) from the graph of f'(x)
15. Identify the f(x) with the greatest change from several graphs of f'(x)
Dimension 4:
12. Establish the procedure to determine the Δf'(x) from the graph of f''(x)
8. Determine the value of the change Δf'(x) from the graph of f''(x)
10. Identify the f'(x) with the greatest change from several graphs of f''(x)
Dimension 5:
16. Determine the corresponding graph of f'(x) from the graph of f(x)
9. Determine the corresponding graph of f''(x) from the graph of f'(x)
2. Determine the corresponding graph of f(x) from the graph of f'(x)
4. Determine the corresponding graph of f'(x) from the graph of f''(x)
Figure 2. Item 1 of our "Test of Understanding Graphs in Calculus (TUG-C)"
Table 1. Description of the Test of Understanding Graphs in Calculus (TUG-C)
Table 3. Item difficulty index (P), item discriminatory index (D), and point-biserial coefficient (rpbs) for each item of the TUG-C
Table 4. Results of the five statistical tests suggested by Ding et al. (2006) for the TUG-C
Table 5. The five dimensions evaluated in the TUG-C, the description of the items, and the percentages selecting a particular choice for each item. (Note that the correct answer is in boldface.)
Dimension 5:
16. Determine the corresponding graph of f'(x) from the graph of f(x): 3%, 27%, 14%, 49%, 7%
9. Determine the corresponding graph of f''(x) from the graph of f'(x): 3%, 13%, 19%, 61%, 5%
2. Determine the corresponding graph of f(x) from the graph of f'(x): 9%, 52%, 2%, 24%, 12%
4. Determine the corresponding graph of f'(x) from the graph of f''(x)
Table 6. Correct answer percentages of the related items, correct answer percentages of students selecting the correct choice in both of the related items, and average correct answer percentages for each dimension
A new enigmatic lineage of Dascillidae (Coleoptera: Elateriformia) from Eocene Baltic amber described using X-ray microtomography, with notes on Karumiinae morphology and classification
Dascillidae are a species-poor beetle group with a scarce fossil record. Here, we describe Baltodascillus serraticornis gen. et sp. nov. based on a well-preserved specimen from Eocene Baltic amber. It differs from all known Dascillidae by its reduced mandibles. After studying the specimen using light microscopy and X-ray microtomography, we tentatively place this genus in the poorly defined subfamily Karumiinae based on the large eyes, serrate antennae, and lack of prosternal process. This is the first representative of the Dascillidae formally described from Baltic amber and the first described fossil member of the subfamily Karumiinae. We briefly discuss the problematic higher classification of Dascillidae, along with the morphology and biogeography of the group.
Introduction
Dascillidae are a small beetle family with about 100 extant species classified in 11 genera distributed in all zoogeographic realms (Ivie and Barclay, 2011; Jin et al., 2013b; Lawrence, 2016; Johnston and Gimmel, 2020). Together with the still-smaller family Rhipiceridae, they form the superfamily Dascilloidea within the series Elateriformia (Lawrence, 2016; Kundrata et al., 2017). Dascillidae are further divided into two subfamilies with unclear limits, i.e., Dascillinae (six genera, about 80 species) and Karumiinae (five genera, about 20 species) (Lawrence and Newton, 1995; Ivie and Barclay, 2011; Lawrence, 2016). There has been no worldwide taxonomic revision of Dascillidae. However, the subfamily Dascillinae has received particular attention during the last decade. The worldwide genera and Asian species of Dascillinae were revised by Jin et al. (2013b), with additional species described in several subsequent papers (Jin et al., 2015, 2016; Li et al., 2017; Fang et al., 2020; Wang et al., 2020a, b). The only Australian genus of the family, Notodascillus Carter, 1935, was revised by Jin et al. (2013a), and Terzani et al. (2017) revised the Western Palearctic species of Dascillus Latreille, 1797. Most recently, Johnston and Gimmel (2020) reviewed the North American Dascillidae of both subfamilies and also provided a species checklist for the New World. Although Karumiinae were included in the latter study, issues of higher classification did not receive much attention. Ivie and Barclay (2011) revised the status of certain genera associated with Karumia Escalera, 1913, but a comprehensive revision of Karumiinae in the modern sense is missing.
The fossil record of Dascillidae is rather scarce and was critically reviewed by Jin et al. (2013c). Two Australian Upper Triassic genera, Apheloodes Dunstan, 1923 and Leioodes Dunstan, 1923, described based on a single elytron each, were removed from Dascillidae and placed in Coleoptera incertae sedis, and the North American Miocene Protacnaeus Wickham, 1914 and Miocyphon Wickham, 1914 were transferred to Psephenidae and Scirtidae, respectively. The monotypic genus Mesodascilla Martynov, 1926 from the Jurassic Karatau locality in Kazakhstan, which was originally assigned to Dascillidae (Martynov, 1926), has recently been treated either as a member of Eulichadidae (Kirejtshuk and Azar, 2013) or Lasiosynidae (Yan et al., 2014). Jin et al. (2013c) kept only three genera which contain fossil species in Dascillidae. The genus Lyprodascillus Zhang, 1989, with two described species from the Miocene of China, has remained tentatively in Dascillidae, although the authors could not confirm its systematic placement based on the available descriptions and illustrations (Jin et al., 2013c). The only Mesozoic dascillid genus is the monotypic Cretodascillus Jin, Slipinski, Pang & Ren, 2013 from the Cretaceous Yixian Formation in China (Jin et al., 2013c). Most fossil species are classified in the extant genus Dascillus, which currently contains six species from the Miocene Shandong Formation of China (Zhang, 1989; Zhang et al., 1994), one from the Miocene Latah Formation of the USA (Lewis, 1973), and one from the Eocene Florissant Formation of the USA (Wickham, 1911). However, at least some of these species probably do not belong to Dascillidae and are kept there only tentatively until the type material is examined in detail (Jin et al., 2013c).
Some fossil taxa attributed to Dascillidae, and usually to Dascillus (sometimes as its synonym Atopa Paykull, 1799), remain unnamed, as for example the specimens found in the Oligocene of Aix-en-Provence in southern France (Hope, 1847) or specimens reported from Baltic amber (Helm, 1886, 1896; Klebs, 1910; Larsson, 1978; Spahr, 1981).
In this study, we describe an enigmatic new genus based on a well-preserved specimen from Eocene Baltic amber. This is the first described representative of Dascillidae from amber. We discuss its systematic placement as well as the higher classification and biogeography of Dascillidae.
Material and methods
The amber piece was polished by hand, allowing improved views of the included specimen, and was not subjected to any additional treatment. For the purpose of light microscopic image capture, the amber specimen was fixed at a suitable angle of view to a Petri dish with gray plasticine modeling clay (Pelikan, Germany, model number 601492). It was photographed submersed in glycerol to prevent reflections and to reduce visibility of small scratches on the surface of the amber piece. Images were taken with a Leica MC 190 HD camera attached to a motorized Leica M205 C stereo microscope equipped with the flexible dome Leica LED5000 HDI or the conventional ring light Leica LED5000 RL-80/40 as an illuminator, applying the software Leica Application Suite X (version 3.7.2.22383, Leica Microsystems, Switzerland). Stacks of photographs were combined with the software Helicon Focus Pro (version 7.6.4, Kharkiv, Ukraine), applying the rendering method "depth map" or "weighted average".
The X-ray micro-CT (µCT) observations were conducted at Daugavpils University, Daugavpils, Latvia (DU), using a Zeiss Xradia 510 Versa system. In order to achieve the best results possible, three scans were conducted: habitus, head/prothorax, and abdomen. For the head/prothorax and abdomen scans, the parameters were mostly identical, with the exception of exposure time. For both scans, the sample-detector and source-sample distances were set to 37.4 and 38 mm, respectively, and the source X-ray beam energy was set to 30 kV at a power of 2 W. Tomographic slices were generated from 2401 slices through a 360-degree rotation, using a 4× objective. The achieved voxel size during these scans was 3.3 µm. Exposure times were set to 8 s for the abdomen scan and 9 s for the head/prothorax scan. The overall scan was carried out at an X-ray beam energy of 40 kV and a power of 3 W; the source-sample distance was set to 24.4 mm and the sample-detector distance to 117.7 mm. Tomographic slices were generated from 1601 slices through a 360-degree rotation using a 0.4× objective, with exposure set to 17 s. The achieved voxel size during the overall scan was 11.7 µm. All three scans had binning set to 2 times, as well as variable exposure time set to 2 at the thickest part of the sample. Prior to each scan, a warmup scan lasting 25 min was conducted. Acquired images were imported into the Dragonfly PRO (version 2020.2) software platform for interactive segmentation and 3D visualization. Final image plates were assembled using Adobe Photoshop CC (version 2019-20.0.5).
Body length of the examined specimen was measured from the clypeus to apex of elytra, body width at the widest part of the body, pronotal length at midline, and pronotal width at the widest part. Morphological terminology follows Jin et al. (2013b) and Johnston and Gimmel (2020). The holotype is deposited in the collection of the Department of Palaeontology of the National Museum, Prague, Czech Republic (NMPC). The ZooBank LSID number for this publication is urn:lsid:zoobank.org:pub:F55CA75B-AF7C-4F1F-BD65-DCBEAAC75410 (12 March 2021).
Etymology
Derived from the words "Baltic" (referring to Baltic amber) and "Dascillus" (a genus name in Dascillidae). Gender: masculine.
Diagnosis
Baltodascillus gen. nov. can be recognized among other genera of Dascillidae by the reduced mandibles (Figs. 2a, 3d). Additionally, the following combination of characters serves to distinguish it from all other genera: strongly serrate antennae, a fusiform terminal maxillary palpomere, large eyes, pronotum widest posteriorly and with a strongly developed lateral carina, lack of a developed prosternal process, complete elytra, confused elytral punctation, a weakly developed elytral epipleuron, and abdomen with five ventrites (Figs. 1-4).
Composition and distribution
Baltodascillus gen. nov. is a monotypic genus and is known exclusively from Eocene Baltic amber.
Etymology
The specific epithet "serraticornis" is a Latin adjective referring to the shape of the antennae.
Diagnosis
As for the genus (vide supra).
Description
Adult male. Body (Fig. 1) about 7.5 mm long and 2.5 mm wide, narrowly elongate, about 3 times as long as wide, weakly convex; dorsally moderately densely setose.
Remark
We conclude that the examined specimen is a male based on the shape of the body, serrate antennae, a long, narrowly rounded abdominal ventrite 5, and aedeagal structure (parameres) detected in X-ray micro-CT imaging.
Discussion
In order to understand the placement of the newly discovered fossil genus within Dascillidae, it is important to acknowledge the currently problematic situation with the subfamilial classification of the group. As currently delimited, Dascillidae contain two vaguely defined subfamilies with a divergent taxonomic history (Lawrence and Newton, 1995; Lawrence, 2016). Free-living and non-modified Dascillinae occur in non-arid areas of western North America, the Greater Antilles, the Palearctic and Oriental regions, western Africa, and Australia. Karumiinae, which contain morphologically modified lineages of which at least some are associated with subterranean termites, are mostly distributed in arid and semiarid regions of western North America, northern Africa, central Asia, and southern South America (Lawrence and Newton, 1995; Jin et al., 2013b; Lawrence, 2016). The latter group had long been treated as a separate family, Karumiidae, associated with the families of the former Cantharoidea (i.e., roughly the soft-bodied Elateroidea), and originally included only the morphologically highly modified groups near Karumia Escalera, 1913, with an apparently soft cuticle, variously reduced elytra, and unknown females (Crowson, 1955; Arnett, 1964; Paulus, 1972). Crowson (1971) first associated Karumiidae with Dascillidae based on a number of morphological characters, although he still kept the family status for both groups. He also transferred the genera Anorus LeConte, 1859, Pleolobus Philippi and Philippi, 1864 (Fig. 5), Genecerus Walker, 1871, and Emmita Escalera, 1914 from Dascillidae to Karumiidae and provided updated diagnoses for both groups. Based on his key, the Karumiidae contained taxa with a simple galea, the ligula with two short lobes, the ventral tarsal lobes absent or not basally articulated, the corpotentorium very broad, the male antenna pectinate or flabellate, and females wingless (apparently referring to known females of Anorus sp.
with shortened elytra and reduced hind wings). However, Crowson's characters for separating the two subfamilies have been largely problematic, since they do not apply to all included genera, and certain intermediate groups like Genecerus in particular violate the diagnosis of Karumiinae (Lawrence and Newton, 1995; Lawrence, 2016). For example, both subfamilies share a very broad corpotentorium, the ventral lobes on the tarsomeres of Genecerus and Anorus are similar to those in Dascillinae, at least the females of Genecerus have well-developed hind wings (Lawrence, 2016), and serrate to pectinate antennae are known only in Emmita and Genecerus but not in any other karumiine genus. Additionally, Grebennikov and Scholtz (2003) concluded that the larval morphology of Pleolobus is highly similar to that of Dascillinae, with the primary difference being that the latter possess molar ridges on the mandible. Unfortunately, the relationships among the dascillid genera have never been tested using a focused molecular approach. The only available molecular phylogenetic analyses, although either preliminary or focused on broader issues, have nonetheless called into question the current subfamilial classification (Kundrata et al., 2017; Johnston and Gimmel, 2020).
Representatives of Dascillidae display a graded series of morphological modifications connected with possible neoteny and soft-bodiedness, similar to the situation in various Elateroidea (Cicero, 1988; Kundrata and Bocak, 2019). In Dascillidae, the modifications represent a continuum from the well-sclerotized groups with adults of both sexes fully developed (although at least some females might have slightly less developed hind wings than their counterparts) (traditional Dascillinae), through more or less soft-bodied groups with males with complete elytra and females with variously shortened elytra and reduced hind wings (Anorus spp., Pleolobus, probably also Emmita) (Fig. 5), to the soft-bodied forms with males with greatly shortened elytra and females unknown but probably even more dramatically modified, termitiform (most Karumia) (Paulus, 1972; Solervicens, 1991; Ivie and Barclay, 2011; Johnston and Gimmel, 2020). Such modifications also affect the morphology of the thorax, such that the hard-bodied groups have a well-developed, spine-like prosternal process, whereas the soft-bodied forms have the prosternal process dramatically reduced to a short, triangle-shaped denticle. It is probable that the modifications of morphology connected with the independent evolution of soft-bodiedness have influenced the formal classification of Dascillidae, similar to many cases in the Elateroidea (Kundrata and Bocak, 2019). This hypothesis, however, needs to be tested using a well-sampled molecular phylogenetic analysis.
Under the current state of knowledge, we tentatively place the newly described Baltodascillus gen. nov. in Karumiinae based on the large eyes, serrate antennae, and lack of a spine-like prosternal process. This fossil genus is morphologically similar to the New World genera Anorus and Pleolobus (Fig. 5) and the African/Middle Eastern genus Genecerus, but differs from these in the strongly serrate (but not pectinate) antennae, the crenulate hind margin of the pronotum, and the confused elytral punctation. It differs from all known Dascillidae by the highly reduced mandibles, which do not overlap and are apparently non-functional. Given the unusual nature of this character, further specimens of this genus will be critical in determining whether this is a stable character or the result of a single teratological specimen. This is the first representative of the Dascillidae formally described from Baltic amber and also the first described fossil member of the subfamily Karumiinae, since Semenoviola obliquotruncata Martynov, 1925, originally described in that group, was determined to be a dermapteran (Bolívar y Pieltain, 1926). Also, Baltodascillus gen. nov. is the first karumiine known from Europe and, therefore, sheds light not only on the paleodiversity and systematics of the group but also on its biogeography.
Data availability. The holotype of Baltodascillus serraticornis gen. et sp. nov. is deposited in the collection of the Department of Palaeontology of the NMPC. Volume renderings of X-ray microtomography of habitus, head/prothorax, hind tarsus, and abdomen are available as Video supplements 1-4, respectively.
Author contributions. RK conceived and designed the study. MLG and RK carried out the morphological investigation. AB conducted the micro-CT scanning. SMB prepared microphotographs. GP and RK prepared figure plates. RK and MLG wrote the initial manuscript with help of GP, AB, and SMB. All authors performed the literature search, discussed the results, and edited, reviewed, and approved the manuscript.
Review statement. This paper was edited by Florian Witzmann and reviewed by Shuhei Yamamoto and one anonymous referee.
Revealing the Core Transcriptome Modulating Plant Growth Phase in Arabidopsis thaliana by RNA Sequencing and Coexpression Analysis of the fhy3 far1 Mutant
Plants must continually calibrate their growth in response to the environment throughout their whole life cycle. Revealing the regularities of early plant growth and development is of great significance for plant genetic modification. It was previously demonstrated that loss of two key light signaling transcription factors, FHY3 and FAR1, can cause a stunted stature at the plant adult stage, and numerous defense response genes can be continuously activated. In this study, we performed a time-course transcriptome analysis of leaf samples from the early 4 weeks of growth of wild-type plants and the fhy3 far1 mutant. By comparative transcriptome analysis, we found that during the early 4 weeks of plant growth, plants primarily promoted morphogenesis by organizing their microtubules in the second week. In the third week, plants began to trigger large-scale defense responses to resist various external stresses. In the fourth week, increased photosynthetic efficiency promoted rapid biomass accumulation. Weighted gene coexpression network analysis of FHY3 and FAR1 revealed that the two light signaling transcription factors may originally be involved in the regulation of genes during embryonic development, and in the later growth stage, they might regulate the expression of some defense-related genes to balance plant growth and immunity. Remarkably, our yeast two-hybrid and bimolecular fluorescence complementation experiments showed that FAR1 interacts with the immune signaling factor EDS1. Taken together, this study demonstrates the major biological processes occurring during the early 4 weeks of plant growth. The light signaling transcription factors FHY3 and FAR1 may integrate light signals with immune signals to widely regulate plant growth by directly interacting with EDS1.
To reveal details of the molecular machinery, many molecular analyses have been performed on the different plant stages, from seed germination to leaf expansion to flowering. An emerging approach for monitoring these processes at scale is the statistical analysis of spatial and temporal transcriptome data, which allows thousands of genes to be followed simultaneously and can yield new insights into the underlying biological mechanisms of plant development.
Light is the most important environmental signal influencing plant growth and development. Higher plants have evolved a wide range of photoreceptors to sense information about the light in their environment. Among these photoreceptors, the red and far-red light-absorbing phytochromes (phys) are the best characterized (Neff et al., 2000; Whitelam et al., 1993). FAR-RED ELONGATED HYPOCOTYL 3 (FHY3) and FAR-RED IMPAIRED RESPONSE 1 (FAR1) function as positive regulators that initiate phyA signaling by directly activating transcription of the downstream targets FAR-RED ELONGATED HYPOCOTYL1 (FHY1) and FHY1-LIKE (Hudson et al., 1999; Lin et al., 2007; Wang & Deng, 2002). Recent studies have demonstrated that FHY3 and FAR1 play multiple roles in plant growth and development, such as in photomorphogenesis (Wang & Deng, 2002), chloroplast division, chlorophyll biosynthesis (Tang et al., 2012), the circadian clock, abscisic acid responses, and plant immunity. These functions indicate that FHY3 and FAR1 are crucial for plant growth and development. Our previous report showed that the adult fhy3 far1 mutant had slow, stunted growth and displayed severe cell death under long-day conditions; this phenotype became even more severe under short-day conditions. Overexpression of the chlorophyll biosynthesis gene HEMB1, of salicylic acid (SA) metabolism and signal transduction-related genes (NahG, PAD4, SID2, and EDS1), or of myo-inositol 1-phosphate synthase (MIPS1) could rescue these phenotypes in the fhy3 far1 mutant (Tang et al., 2012; Wang et al., 2016). In addition, chromatin immunoprecipitation-sequencing (ChIP-seq) studies have shown that FHY3 has over 1,000 putative direct targets in Arabidopsis, suggesting that FHY3 might have broader functions in plant growth and development. However, when and why FHY3/FAR1 act as either activators or repressors at various developmental stages are still unknown.
RNA-seq is an effective method for analyzing time-course gene expression and obtaining system-wide information about gene transcription and regulation. Recently, a high temporal-resolution investigation of maize seed transcriptomes separated the early endosperm dynamic transcriptome into four distinct groups corresponding to four developmental stages and unraveled the genetic control of early seed development (Yi et al., 2019). Using coexpression analysis, the core conserved stress-responsive genes (CARG) were identified as being involved in the response to multiple abiotic stresses in sesame species (Dossa et al., 2019). Through weekly transcriptome analysis of Arabidopsis halleri over 2 years, seasonal transcriptomic dynamics were revealed, and a large number of seasonal genetic oscillations were defined; this was the first time molecular studies in the lab were combined with ecological studies in natural environments (Nagano et al., 2019).
Preparation of Plant Material
Plant materials used in this study include the double mutant fhy3 far1 (Lin et al., 2007; Wang & Deng, 2002) and the wild type Nossen (NO). To avoid germination inconsistency resulting from seed material at different maturity levels, all seeds were surface sterilized and sown onto MS plates containing 0.5% sucrose and 0.8% agar. After incubation for three days at 4 °C, seeds were transferred to the growth room with 60% humidity and were cultured under a 16/8 hr (light/dark) photoperiod at 22 °C for 7 days. The dish-grown 7-day-old seedlings were then transferred into a soil:vermiculite (3:1) mixture and maintained under identical growth conditions with regular watering.
Electrolyte Leakage Measurement
An electrolyte leakage assay was performed using a method described by Chen et al. (2013). Leaves were submerged in 5 mL of distilled water for 48 hr. The conductivity of the solutions was measured with a conductivity meter at regular intervals. Three biological replicates were set up for each of the measurement intervals. Statistical analysis was performed using the Student's t test.
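As a rough illustration of the statistical comparison described above, the two-sample Student's t test on conductivity readings can be sketched as follows. The readings and the function name are invented for illustration; only the pooled-variance t test itself corresponds to the protocol.

```python
import math

def pooled_t(a, b):
    """Two-sample Student's t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variance, group a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)   # sample variance, group b
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (mb - ma) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical conductivity readings (three biological replicates per genotype)
wt = [18.2, 19.1, 17.8]        # wild type (NO), illustrative values
mutant = [31.5, 29.8, 33.0]    # fhy3 far1, illustrative values

t = pooled_t(wt, mutant)
# The two-tailed critical value for p = 0.01 at df = 4 is about 4.604,
# so |t| above that threshold corresponds to p < 0.01.
print(f"t = {t:.2f}")
```

With three replicates per genotype the degrees of freedom are 4, so any |t| above ~4.60 would be significant at the p < 0.01 level used in the figure legends.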
Transcriptome Sequencing
For the transcriptome samples, plant material (fhy3 far1 mutant and NO wild-type plants) was collected at four developmental stages: 1-week-old cotyledons and 2-, 3-, and 4-week-old rosette leaves. Three independent biological replicates were prepared for each genotype at each of the four time points. We first randomly selected 1-week-old seedlings of fhy3 far1 and NO from the culture dishes (about 20 seedlings per sample). The cotyledon samples were then collected by manual dissection, snap-frozen in liquid nitrogen, and stored at −80 °C before processing. Each rosette leaf sample was obtained by pooling leaves from at least three plants. Specifically, for each 2-week-old sample, we randomly selected five plants in good growth status and collected the two largest true leaves from each plant. For the older samples, we selected three plants and collected one large true leaf from the same position on each plant.
Total RNA was isolated using the RNA Pure Plant Kit (Tiangen). RNA-seq libraries were constructed according to the manufacturer's protocol of the NEBNext Ultra RNA Library Prep Kit and sequenced on an Illumina HiSeq 2000 system.
RNA-Seq Data Analysis
Differentially expressed genes were screened with the DESeq2 software (log2 fold change ≥ 1). The Blast2GO and ClusterProfiler programs were used for gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) functional enrichment analyses; q values less than 0.05 were considered to indicate significant enrichment.
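A minimal sketch of the DEG-calling thresholds described above, applied to a hypothetical results table of (gene, log2 fold change, adjusted p value). The gene IDs and values are illustrative, and combining the fold-change cutoff with an adjusted-p cutoff of 0.05 in one pass is an assumption on our part; the text applies the q < 0.05 threshold to the enrichment step.

```python
# Hypothetical per-gene results: (gene ID, log2 fold change, adjusted p value)
results = [
    ("AT1G01060", 2.3, 0.001),   # illustrative entries, not real values
    ("AT2G46830", -1.4, 0.020),
    ("AT5G61380", 0.4, 0.300),
    ("AT3G22840", 1.1, 0.080),
]

def call_degs(rows, lfc_cut=1.0, q_cut=0.05):
    """Split genes into up- and downregulated DEG lists by the stated cutoffs."""
    up = [g for g, lfc, q in rows if lfc >= lfc_cut and q < q_cut]
    down = [g for g, lfc, q in rows if lfc <= -lfc_cut and q < q_cut]
    return up, down

up, down = call_degs(results)
print(up, down)   # → ['AT1G01060'] ['AT2G46830']
```

The third gene fails the fold-change cutoff and the fourth fails the significance cutoff, so neither is called.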
RNA Extraction and Quantitative RT-PCR
Plant total RNA was extracted using an RNA extraction kit (Tiangen), and first-strand cDNA was synthesized with reverse transcriptase (Invitrogen). Real-time PCR was performed according to the manufacturer's protocol of the SYBR Premix ExTaq Kit (Takara). All primers used are listed in Table S1. Three biological replicates were performed for each sample, and expression levels were normalized against those of UBQ controls.
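Normalization against a reference gene such as UBQ is commonly done with the 2^-ΔΔCt method; the sketch below assumes that method, since the source does not state which relative-quantification formula was used. The Ct values and the gene pairing are invented for illustration.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ΔΔCt method (assumed, not stated in the source)."""
    d_ct_sample = ct_target - ct_ref              # ΔCt in the sample of interest
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # ΔCt in the calibrator sample
    return 2 ** -(d_ct_sample - d_ct_control)     # 2^-ΔΔCt

# e.g. a target gene in 4-week vs 3-week leaves, normalized to UBQ
# (all Ct values are hypothetical):
fold = relative_expression(ct_target=22.0, ct_ref=18.0,
                           ct_target_ctrl=24.0, ct_ref_ctrl=18.0)
print(f"{fold:.1f}-fold")   # 2^-((22-18)-(24-18)) = 2^2 = 4.0-fold
```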
Coexpression Analysis
For the weighted gene coexpression analysis, genes with a Pearson correlation coefficient (r) greater than 0.9 were considered significantly coexpressed with FHY3 or FAR1. The coexpression network was built with a Perl script, and data correlation and visualization were performed with the Cytoscape ver. 3.4.10 program (Smoot et al., 2011).
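The coexpression criterion above (Pearson r > 0.9 against FHY3 or FAR1 across the time course) can be sketched as follows; the four-point expression profiles and gene names are invented, and the function names are ours.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Hypothetical week-1..4 expression profiles (e.g. normalized counts)
fhy3 = [5.0, 8.0, 3.0, 2.0]
genes = {
    "geneA": [10.0, 16.0, 6.5, 4.0],   # tracks the FHY3 profile closely
    "geneB": [2.0, 1.0, 7.0, 9.0],     # anti-correlated with FHY3
}

# Keep only genes exceeding the r > 0.9 threshold used in the study
coexpressed = [g for g, prof in genes.items() if pearson(fhy3, prof) > 0.9]
print(coexpressed)   # → ['geneA']
```

With only four time points, very high r values can arise by chance, which is presumably why the study uses the stringent 0.9 cutoff.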
Plasmid Construction
To obtain the open reading frames of FAR1, EDS1, PAD4, and SAG101, first-strand cDNA was reverse transcribed from total RNA extracted from Col wild-type seedlings using an oligo(dT)18 primer and high-fidelity FastPfu DNA Polymerase (TransGen). The fragments were cloned into the pEASY-Blunt vector (TransGen), yielding the pEASY-FAR1/EDS1/PAD4/SAG101 constructs, respectively. Note that the translational stop codon of each gene was deleted to facilitate follow-up cloning. The primers are listed in Table S1, and all clones were validated by sequencing. To construct vectors for the yeast two-hybrid assay, pEASY-PAD4 was digested with EcoRI and BamHI, and the PAD4 fragment was inserted into the pLexA vector (Clontech) cut with EcoRI and BamHI to give rise to pLexA-PAD4. The pEASY-EDS1 plasmid was cut with MfeI and XhoI, pEASY-SAG101 was digested with MfeI and SalI, and the corresponding fragments were ligated into the EcoRI/XhoI sites of pLexA, producing pLexA-EDS1 and pLexA-SAG101, respectively. The yeast vectors pAD-FHY3 and pAD-FAR1 were described previously (Lin et al., 2007). To prepare constructs for the BiFC assay, fragments from pEASY-FAR1 or pEASY-EDS1 cut with XbaI and XhoI were cloned into the pUC-SPYNE vector (Walter et al., 2004) digested with XbaI and XhoI, generating pYFPN-FAR1 and pYFPN-EDS1, respectively. The EDS1 gene was released from pEASY-EDS1 cut with XbaI and XhoI and cloned into the XbaI-XhoI sites of the pUC-SPYCE vector to generate pYFPC-EDS1. To construct the LUC reporter driven by the SID2 promoter, a 2-kb fragment upstream of the SID2 ATG translation start codon was PCR amplified with the SID2P1 and SID2P2 primers from Col genomic DNA. The PCR fragment was inserted into the pGEM-T Easy vector (Promega) to produce pGEM-SID2p. After sequencing confirmation, the SID2 promoter was released from pGEM-SID2p cut with MfeI and SacI and ligated into the EcoRI-SacI sites of the YY96 vector (Yamamoto et al., 1998) to produce SID2p:LUC.
Yeast Two-Hybrid Assay
Yeast two-hybrid analysis was performed according to the Yeast Protocols Handbook (2009). Briefly, the AD-fusion plasmids were transformed into the YM4271 strain, while the LexA-fusion plasmids were transformed into the EGY48 strain. After mating, transformants were grown on SD/-Trp/-Ura/-His dropout plates containing X-gal for blue color development, from which relative β-galactosidase activity was quantified.
BiFC Assay
For the BiFC assay in Nicotiana benthamiana leaves, Agrobacterium tumefaciens strain GV3101 carrying the described plasmids was grown overnight in LB medium. The cultures were pelleted and resuspended in equal volumes of induction medium (10 mM MgCl2, 10 mM MES pH 5.7, 0.2 g L−1 acetosyringone) for 3 hr at 28 °C. The p19 protein of tomato bushy stunt virus was used to suppress gene silencing (Voinnet et al., 2003). The desired Agrobacterium cultures were combined to an OD600 ratio of 0.7:0.7:1 (YFPN-fused plasmid:YFPC-fused plasmid:p19 silencing plasmid) and infiltrated into the leaves of 3-week-old N. benthamiana. Fluorescence was visualized in the epidermal cell layers of the leaves 2-3 days after infiltration using a confocal microscope (Olympus).
Luciferase Reporter Assay
For transient expression assays, Agrobacterium strains containing SID2p:LUC reporter plasmids, various effector constructs (Myc-FHY3, Myc-FAR1, or Myc-EDS1), 35S:GUS internal control and p19 silencing plasmids were mixed at a ratio of 0.7:0.7:0.3:1, and these were coinfiltrated into the abaxial surface of N. benthamiana leaves. Three days after infiltration, proteins from N. benthamiana leaves were extracted with the 1× Cell Culture Lysis Reagent (Promega). LUC and GUS activities were quantified as previously described (Tang et al., 2012). The relative reporter gene expression levels were expressed as LUC/GUS ratios.
Loss of FHY3/FAR1 Stunts Plant Growth and Initiates Premature Cell Death
In our previous study, we noticed that the fhy3 far1 mutant displayed a stunted stature at the adult stage. To further investigate how FHY3 and FAR1 regulate plant growth, we analyzed the phenotypes of the fhy3 far1 mutant at different developmental stages. Aside from having an elongated hypocotyl, the fhy3 far1 mutant did not differ much from the NO (Nossen) wild type during the first 2 weeks (Figure 1A). Remarkably, the 3-week-old fhy3 far1 grew slowly and showed a retarded rate of leaf area increase (Figure 1F). During week 4, the wild type grew rapidly, and its average leaf area quadrupled, reaching 5.9 cm2. In comparison, the fhy3 far1 mutant had significantly reduced vegetative growth, and its leaf area was less than one-tenth that of the wild type. The leaves of the fhy3 far1 mutant visibly developed necrotic lesions.

Figure 1 (legend excerpt): Leaf areas of 3-week-old and 4-week-old leaves were measured using ImageJ software. Data are represented as mean ± SD; n = 9. (G) Electrolyte leakage of the double mutant and the wild type. Three-week-old leaves were immersed in water, and electrolyte leakage was measured periodically. Asterisks denote statistically significant differences in electrolyte leakage compared with the WT (p < 0.01, Student's t test). Similar results were obtained in three independent experiments.
When stained with 3,3′-diaminobenzidine (DAB) and nitroblue tetrazolium (NBT), which indicate hydrogen peroxide and superoxide accumulation, respectively, the 3-week-old leaves of fhy3 far1 were heavily stained, whereas those of the wild type were barely stained (Figure 1E,F). To determine whether photooxidative damage resulted in cell death in the fhy3 far1 mutant, we analyzed the cell death-induced electrolyte leakage of the 3-week-old leaves. In agreement with the DAB and NBT staining results, electrolyte leakage was significantly greater in fhy3 far1 than in the wild type (Figure 1G). Taken together, our results indicate that FHY3 and FAR1 play key roles in controlling plant growth, especially during the third week.
Weekly Dynamic of Transcriptomes in the Wild Type
In order to explore the possible molecular mechanisms of plant growth and development, 24 leaf samples at four developmental stages were collected and subjected to Illumina paired-end sequencing. After cleaning and filtering out low-quality and ambiguous reads, 506 million clean reads containing 151 Gb of valid data were acquired (Table 1). The sequencing data were deposited in the National Center for Biotechnology Information (NCBI) database (accession number: SRP229410). To reveal the vital biological processes of the different growth stages, we first analyzed the weekly transcriptome dynamics of the wild type. In the second week, compared with the first, 1,741 genes were upregulated and 3,050 were downregulated (Figure 2). A total of 1,353 and 1,741 genes were induced in the third and fourth weeks, respectively, whereas 805 and 1,619 were repressed. Based on the numbers of differentially expressed genes (DEGs), we found that early-stage gene regulation was highly dynamic, especially in 2-week-old leaf tissue, and that this transcriptome modulation likely plays a vital role in the subsequent growth and development of the plant.
To gain further functional insights, DEGs were assigned to GO terms covering cellular component, molecular function, and biological process. Comparing GO annotations of genes in 2-week rosette leaves with those of 1-week cotyledons, we found that the 2-week upregulated genes were closely related to the dynamic arrangement of microtubules (MTs). The enriched terms included cellular component (kinesin complex and microtubule), molecular function (microtubule motor activity and microtubule binding), and biological process (microtubule-based movement and microtubule cytoskeleton organization) (Figure 3A). The dynamic behavior of MTs plays a pivotal role in controlling cell growth and shape formation, and MT-associated proteins (MAPs) control MT dynamics, stability, and organization (Lloyd & Hussey, 2001; Sedbrook, 2004). The IQ67 DOMAIN (IQD) protein family is the largest and most important class of MAPs in plant development and in plant responses to the environment (Bürstenbinder et al., 2007; Liang et al., 2018). Here, we found that IQD8, IQD21, and IQD25 displayed significantly increased expression during the second week of leaf growth (Figure 4). JAGGED, which encodes a zinc finger transcription factor controlling anisotropic growth, was also induced at the second-week stage (Schiessl et al., 2014). Moreover, the DNA replication, chromatin binding, and translation categories were significantly overrepresented, and a large number of nucleus- and ribosome-localized proteins were also enriched among the 2-week upregulated genes. This is consistent with the established Arabidopsis MAP proteome data (Hamada et al., 2013), in which proteins implicated in replication, transcription, and translation were highly enriched: proteins involved in RNA transcription-related processes constituted 23.5%, proteins involved in DNA replication accounted for 5.0%, and proteins with roles in translation accounted for 5.7%.
The second week is a critical period for plant growth: in this period, plants first differentiate the true leaves and conduct early leaf morphogenesis through cell proliferation and cell expansion. A similar transcriptional change was observed in the temporal transcriptome of maize seed development. Consistent with the active nuclear division and cell proliferation that occur at the coenocyte formation stage of the maize seed, the functional categories of DNA replication, transcription factor activity, DNA binding, microtubule-based movement, microtubule motor activity, and nucleosome assembly were also overrepresented in its coexpression modules (Yi et al., 2019). Hence, tissue formation in different plant materials may rely on common mechanisms. In the second week, the wild type (NO) initiates early true-leaf morphogenesis by activating DNA/RNA-related processes and microtubule arrangement. Among the downregulated genes, the largest groups belonged to oxidoreductase activity and plant stress responses, which are often accompanied by higher redox activity (Figure S1). In addition, it should be noted that the addition of sucrose to the MS medium could influence gene expression in the 1-week-old seedlings. To avoid germination inconsistency resulting from seed materials of different maturity, all seeds were sown onto MS plates containing 0.5% sucrose and grown for 7 days. Sucrose not only serves as a carbon skeleton supply but also acts as a signal molecule that regulates a variety of growth and developmental processes in plants. It has been reported that low concentrations (0.5%-1%) of sucrose promote seed germination, primary root growth, and hypocotyl elongation at the earliest stages of plant growth (Singh et al., 2017).
We compared the 1-week upregulated genes with sucrose-responsive genes (Blasing et al., 2005) and found that 87 sucrose-induced genes were upregulated in the 1-week-old cotyledons grown in dishes compared with the 2-week-old leaves grown in soil (Appendix S1). This was consistent with the presence of the sucrose metabolic process category among the 2-week downregulated genes (Figure S1). It is often assumed that growth and defense are negatively correlated: plants must efficiently allocate their resources between stress-response pathways and growth-promoting pathways to succeed during development. Compared with the second week, a larger number of stress-related genes were positively regulated in the third week (Figure 3B). These primarily included oxidoreductase activity, iron homeostasis, and response to stress and defense. Enhanced cellular oxidation plays an important role in the regulation of plant growth and stress responses (Considine & Foyer, 2014). Iron ions can exist in both ferric and ferrous forms and function as a crucial redox catalyst in many cellular processes such as DNA replication, energy production, and plant immunity (Cassat & Skaar, 2013; Ganz & Nemeth, 2015; Luo et al., 1994). MPK3 encodes a mitogen-activated protein kinase that is an important component of ROS signaling pathways (Mittler et al., 2011). The bHLH transcription factor FIT functions as the central regulator of the Strategy I iron-uptake response (Colangelo & Guerinot, 2004; Jakoby et al., 2004). As shown in Figure 4, the expression levels of MPK3 and FIT were increased in the leaves during the third week. We compared the 3-week upregulated genes with genes responding to pathogen infection (Bartsch et al., 2006) and found that 225 pathogen-induced genes were represented in the upregulated group (Appendix S1).
In addition, a total of 17 R genes, which encode proteins that recognize specific pathogen effectors, were induced at the third-week stage. Consistently, transcript levels of PR (pathogenesis-related) genes, including PR1, PR4, and PR5, were also greatly upregulated. Together, these results indicate that the 3-week-old plant begins to activate response signals to modify its growth in reaction to changing environmental conditions. Subsequently, we analyzed the DEGs of 4-week leaves by comparison with the third week and found that the upregulated DEGs were mainly assigned to photosynthesis (Figure 3C). These genes include those encoding the major photosynthetic complexes of the LHCA and LHCB protein families, photosystem I/II subunits, and chlorophyll synthesis-related key enzymes (HEMA1, CHLH, GUN4, CAO, and PORA). Four of these genes, i.e., LHCB1.1, HEMA1, CHLH, and PORA, were selected and confirmed by quantitative RT-PCR (qRT-PCR) analysis to be induced at the fourth-week stage (Figure 4). Furthermore, the oxidoreductase activity, response to auxin, and signal transduction categories were also significantly overrepresented among the 4-week upregulated genes. Intracellular redox interactions are important for developmental processes. Chloroplasts are powerful generators of redox signals through the core process of photosynthesis, and the plant cell monitors chloroplast status to emit signals that regulate nuclear gene expression in a timely manner. Comparing against the ROS transcriptional footprints (Willems et al., 2016), we found that 23 Genomes Uncoupled (GUN) retrograde signaling-related genes were represented among the 4-week upregulated genes, including two chlorophyll biosynthetic genes and 12 photosystem subunit genes (Appendix S1). Plants rely on photosynthesis to convert solar energy, carbon dioxide, and water into chemical energy and biomass.
This increased photosynthetic efficiency contributes to rapid biomass accumulation, consistent with the rapidly increasing leaf area of the 4-week-old wild type.
Analysis of Differentially Expressed Genes (DEGs) in fhy3 far1 Mutant
To further examine the key biological processes at the various developmental stages, we conducted a 4-week transcriptome analysis of the growth-retarded mutant fhy3 far1. Compared with NO, the number of DEGs in each developmental phase of the fhy3 far1 mutant was significantly increased, especially in the first 3 weeks (Figure 5). In the mutant, 2,139 genes were upregulated and 3,132 downregulated in the 2-week samples, while in the 3-week leaves, 3,475 genes were upregulated and 2,335 were downregulated, each compared with the previous week. Of the upregulated genes, only 231 of 2,139 (10%) and 385 of 3,475 (11%) showed expression patterns similar to the wild type. These results indicate that FHY3 and FAR1 play important roles at various plant developmental stages, especially during the first 3 weeks, and that loss of FHY3 and FAR1 causes large-scale changes in the transcriptome.
After classifying the DEGs by their GO terms, we found that in the second week the largest groups of upregulated DEGs in fhy3 far1 fell under transmembrane transport activity, including drug transmembrane transport, ion transport, transferase activity, and channel activity (Figure S2). Oxidoreductase activity was also high in the second week. Together with the eukaryotic ortholog groups (KOG) analysis, these results show that defense responses were triggered prematurely in the second week (Figure S3), and the induced transmembrane transport and oxidoreductase activities may partly contribute to this immune response. Comparing against genes responding to pathogen infection, we found that 530 pathogen-induced genes were represented in the 2-week upregulated group of fhy3 far1 (Appendix S1). Unlike in the wild type, early morphogenesis-related microtubule activity was delayed to the third week in the mutant. Compared with gene expression in the second week, 132 MAP-coding genes were upregulated in the 3-week-old mutant (Appendix S1). It is known that both MT assembly and dynamics are assisted by the coordinated action of MAPs. Comparing the DEGs of fhy3 far1 and NO at the various developmental stages also revealed that the number of DEGs in the second week was far greater than at the other stages and that many genes closely related to microtubule activity were indeed differentially expressed in this week (Figure 6B). These results suggest that FHY3 and FAR1 are two key transcription factors that positively regulate MT assembly and properly restrain the hypersensitive response in the second week to ensure optimal plant growth. Loss of FHY3 and FAR1 led to the gradual appearance of the stunted stature and necrotic lesions in the third week. In addition, we found that the protein kinase activity and protein phosphorylation categories were also significantly overrepresented among the DEGs between fhy3 far1 and NO.
Protein phosphorylation is a dominant mechanism of information transfer in cells. Both the MT arrangement process and plant immunity response are accompanied by the phosphorylation of a large number of proteins. In the regulation of MT arrangement, MAPs and other regulators of MT dynamics are modified post-translationally through reversible phosphorylation to reorganize the microtubule cytoskeleton for environmental and developmental changes (Sasabe et al., 2006;Wasteneys & Ambrose, 2009). It is also known that mitogen-activated protein kinases (MAPKs) are important regulators of plant immunity. Compared to wild type, we observed that several MAPK-encoding genes (MPK1, MPK2, MPK3, MPK7, MPK11, and MPK15) and MAPK cascades (MKK4/MKK5-MPK3 and MKK1/MKK2-MPK4) that are involved in plant responses to biotic stress were activated in the 2-week-old mutant. Together, these data indicate that FHY3 and FAR1 play important regulatory functions in the key biological processes in the early developmental stages of the plant, and that the loss of these two transcription factors can cause large-scale changes in the transcriptome and disrupt cellular metabolism, finally resulting in stunted growth and an out-of-control defense response.
Identification of FHY3/FAR1 Coexpressed Genes
To obtain more information about the regulatory functions of the transcription factors FHY3/FAR1, coexpression analysis using the Cytoscape software was carried out to identify genes that may be associated with the two factors. We identified a total of 40 genes coexpressed with FHY3 and 81 genes coexpressed with FAR1 during the early 4 weeks of growth (Figure 7). Among these, 19 genes were coexpressed with both FHY3 and FAR1, indicating that FHY3 mostly works with its homolog FAR1 to exert its regulatory function; moreover, FAR1 appears to have the broader regulatory role. We then divided the coexpressed genes into different molecular clusters. As shown in Figure 7, red-labeled genes represent the 19 genes coexpressed with both FHY3 and FAR1. Most interestingly, seven defense response genes (including two TIR-NBS-LRR R genes), two cell differentiation-related genes, three hormone biosynthesis regulation genes (ABA3, KAO1, and WRKY46), and four oxidoreductase activity genes were coexpressed with FHY3. Consistent with previous studies, the fhy3 far1 double mutant showed a lesion mimic phenotype, and a number of defense-related genes were strongly induced.
These results indicate that FHY3 is involved in the defense response, likely by regulating the TIR-NBS-LRR-mediated genes. Another gene that deserves attention is WRKY46, a well conserved WRKY domain transcription factor that plays crucial roles in plant innate immunity as well as in abiotic stress responses (Ding et al., 2014;Götz et al., 2008;Hu et al., 2012). It has been reported that WRKY46 could regulate abscisic acid (ABA) signaling and auxin homeostasis to inhibit lateral root development under osmotic stress conditions (Ding et al., 2015). Similarly, FHY3 could modulate ABA signaling and SA signaling to regulate plant development and plant immunity Wang et al., 2016).
Thus, we speculate that FHY3 may associate with WRKY46 to collectively regulate cellular hormone levels to control the growth and stress response processes. Going back to FAR1, we found that it has nine coexpressed genes which were specifically expressed at the embryonic stage, four defense response genes, and three iron homeostasis-related genes. Thus, we speculate that FAR1 may play an important role in the early development stage, and this may be through the regulation of redox homeostasis and inhibition of the early immune response to promote plant growth. Combined with the phenotype of fhy3 far1 mutant, these data indicate that the two light signaling factors may be associated with the positive regulation of early embryonic development and in the later growth stage may negatively regulate expression of defense-related genes to balance plant growth and immunity.
FAR1 Physically Interacts With EDS1
Our previous study indicated that FHY3 and FAR1 negatively modulate plant immunity by regulating the NB-LRR-mediated SA signaling pathway in the defense response. Enhanced disease susceptibility 1 (EDS1) is regarded as a central regulator of plant innate immunity; it interacts with two sequence-related proteins, Phytoalexin deficient 4 (PAD4) and Senescence-associated gene 101 (SAG101), and operates upstream of pathogen-induced SA accumulation (Feys et al., 2001, 2005). To explore how the transcription factors FHY3/FAR1 participate in the regulation of immune signaling, we first focused on these three nuclear-localized immune signaling factors, EDS1, PAD4, and SAG101, which appear to shuttle between the nucleus and the cytoplasm; the movement of these proteins is important for transcriptional reprogramming in disease resistance (Wiermer et al., 2007; Vlot et al., 2009). We tested whether these proteins could interact with FHY3 and FAR1 in the nucleus. In a yeast two-hybrid assay, we found that the combination of LexA-EDS1 (EDS1 fused to the DNA-binding domain of LexA) and AD-FAR1 (FAR1 fused to the activation domain of B42) strongly activated LacZ reporter gene expression, indicating that EDS1 and FAR1 interact in yeast cells (Figure 8A,B). We also observed that FHY3 interacted weakly with EDS1; however, no interaction was observed between FHY3/FAR1 and PAD4 or SAG101 (Figure 8A). To test this notion in vivo, we carried out a bimolecular fluorescence complementation (BiFC) assay in Nicotiana benthamiana leaves via an Agrobacterium-mediated transient expression approach. As a positive control, YFPN-EDS1 (EDS1 fused to the N-terminus of yellow fluorescent protein) and EDS1-YFPC (EDS1 fused to the C-terminus of YFP) interacted and produced fluorescence signals in the nucleus and cytoplasm (Figure 8D). We observed that coexpression of YFPN-FAR1
and EDS1-YFPC resulted in strong fluorescence in the nuclei of N. benthamiana leaves (Figure 8C), suggesting that FAR1 and EDS1 interact in this subcellular compartment. However, expression of either FAR1 or EDS1 alone failed to produce YFP fluorescence. Thus, we conclude that FAR1 and EDS1 interact in the nucleus.

Figure 7. Coexpression network analysis of FHY3 and FAR1. The FHY3/FAR1 coexpression module was created using Cytoscape ver. 3.4.10. Red-labeled genes represent genes coexpressed with both FHY3 and FAR1. Edges with correlation values smaller than 0.9 were removed.
To explore the molecular relevance of the FAR1-EDS1 interaction, we constructed a luciferase reporter gene driven by the SID2 (SALICYLIC ACID INDUCTION DEFICIENT 2, encoding a key enzyme in pathogen-induced SA biosynthesis) promoter (SID2p:LUC) and performed a transient expression assay in N. benthamiana leaves with FAR1 and/or EDS1 effectors. FAR1 alone did not affect the expression of SID2p:LUC, whereas EDS1 strongly induced its expression (Figure 8D). Most intriguingly, coexpression of EDS1 with FAR1 markedly reduced the expression promoted by EDS1 alone (Figure 8D), suggesting that FAR1 inhibits the activity of EDS1 on downstream gene expression. To further investigate the effect of FHY3/FAR1 on the molecular function of EDS1, we comparatively analyzed the gene regulatory functions of FHY3/FAR1 and EDS1 at the transcriptome level. Through analysis of the gene expression profiles of 4-week-old fhy3 far1 and eds1, we found that 142 out of 274 (52%) EDS1-induced genes (Bartsch et al., 2006) were represented in the FHY3/FAR1 downregulated group, while 69 (25%) EDS1-induced genes were found to be downregulated in fhy3 far1 (Figure 8E). Together, these results indicate that FHY3 and FAR1 are involved in the defense response, likely through the EDS1-mediated immune signaling pathway.
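The overlap statistics above (e.g., 142 of 274 EDS1-induced genes, 52%) reduce to a set intersection between a published gene list and a DEG set; a minimal sketch with invented gene identifiers:

```python
# Hypothetical gene ID sets standing in for the published EDS1-induced list
# and the fhy3 far1 downregulated DEG set
eds1_induced = {"g1", "g2", "g3", "g4"}
ff_downregulated = {"g2", "g3", "g5", "g6"}

# Intersect the two sets and report the fraction of the reference list covered
overlap = eds1_induced & ff_downregulated
pct = 100 * len(overlap) / len(eds1_induced)
print(sorted(overlap), f"{pct:.0f}%")   # → ['g2', 'g3'] 50%
```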
FHY3 and FAR1 Play a Broad Regulatory Role in Plant Phase Growth
As sessile organisms, plants must adjust themselves in time according to changes in the external environment, and extensive molecular interplay between external and internal signals underlies a plant's developmental regularity. In this study, using a time-course transcriptome analysis, we revealed three major biological processes during the first 4 weeks of plant growth: morphogenesis promoted in the second week, large-scale defense responses triggered in the third week, and rapid biomass accumulation supported by increased photosynthesis in the fourth week. RNA-seq analysis revealed that the loss of both FHY3 and FAR1 completely disturbed these three processes. In the fhy3 far1 mutant, defense responses were triggered prematurely in the second week, morphogenesis-related microtubule activity was delayed to the third week, and photosynthesis was also affected in the fourth week. These results indicate that FHY3 and FAR1 play broad regulatory roles in plant growth and development across developmental stages. Although research on the physiological and molecular mechanisms of FHY3/FAR1 in many biological processes has made great progress, when and how they sense endogenous growth signals and integrate various environmental signals to exert their growth-dependent regulatory functions still requires further study.
FHY3/FAR1 Integrate Plant Immune Signaling
EDS1 and PAD4 localize to both the nucleus and cytosol. Their dynamic distribution between these two compartments upon pathogen recognition is likely responsible for proper signal relay (Garcia et al., 2010; Wiermer et al., 2007). Two studies have improved our understanding of defense signaling by revealing that EDS1 forms protein complexes with the TIR-NBS-LRR disease resistance proteins RPS4 and RPS6 in the nucleus and activates defense signaling (Bhattacharjee et al., 2011; Heidrich et al., 2011). However, the nuclear actions of EDS1 remain poorly understood. Our finding that FAR1 interacts with EDS1 and represses its activity sheds light on the molecular role of EDS1 in the nucleus, where it might regulate transcriptional reprogramming by interacting with other transcription regulators. In addition, certain photoreceptor mutants (e.g., phyB) show susceptible phenotypes (Kazan & Manners, 2011), indicating that light has a profound influence on plant immunity. Our study links two key components of the phyA signaling pathway, FHY3 and FAR1, with the defense response, and these two factors likely function as a node of crosstalk between light and immune signaling. Further investigation is needed to elucidate the molecular mechanisms behind their effects on the downstream gene expression of EDS1.
Supporting Material
The following supporting material is available for this article:
• Figure S1. Enrichment of selected categories of GO biological processes in genes downregulated in the 2-, 3-, and 4-week leaves of NO compared to the previous week.
• Figure S2. Enrichment of selected categories of GO biological processes in genes upregulated in the 2-, 3-, and 4-week leaves of fhy3 far1 compared to the previous week.
• Figure S3. Enrichment of selected categories of KOG biological processes in genes upregulated in the 2-, 3-, and 4-week leaves of fhy3 far1 compared to the previous week.
• Table S1. List of primers used in this study.
• Appendix S1. Summary of characteristic genes in the different developmental stages of wild type (NO) and mutant (fhy3 far1).
Aging-relevant human basal forebrain cholinergic neurons as a cell model for Alzheimer's disease
Alzheimer's disease (AD) is an adult-onset mental disorder with aging as a major risk factor. Early and progressive degeneration of basal forebrain cholinergic neurons (BFCNs) contributes substantially to cognitive impairments of AD. An aging-relevant cell model of BFCNs will critically help understand AD and identify potential therapeutics. Recent studies demonstrate that induced neurons directly reprogrammed from adult human skin fibroblasts retain aging-associated features. However, human induced BFCNs (hiBFCNs) have yet to be achieved. We examined a reprogramming procedure for the generation of aging-relevant hiBFCNs through virus-mediated expression of fate-determining transcription factors. Skin fibroblasts were obtained from healthy young persons, healthy adults and sporadic AD patients. Properties of the induced neurons were examined by immunocytochemistry, qRT-PCR, western blotting, and electrophysiology. We established a protocol for efficient generation of hiBFCNs from adult human skin fibroblasts. They show electrophysiological properties of mature neurons and express BFCN-specific markers, such as CHAT, p75NTR, ISL1, and VACHT. As a proof-of-concept, our preliminary results further reveal that hiBFCNs from sporadic AD patients exhibit time-dependent TAU hyperphosphorylation in the soma and dysfunctional nucleocytoplasmic transport activities. Aging-relevant BFCNs can be directly reprogrammed from human skin fibroblasts of healthy adults and sporadic AD patients. They show promise as an aging-relevant cell model for understanding AD pathology and may be employed for therapeutics identification for AD.
Background
The basal forebrain cholinergic system, located to the front of and below the striatum, is the predominant source of cortical cholinergic input [1]. Early and progressive degeneration of basal forebrain cholinergic neurons (BFCNs) contributes substantially to cognitive impairments of human patients with Alzheimer's disease (AD) [2,3]. The importance of BFCNs in AD is further demonstrated in animal models, the behavior of which can be significantly improved through cell grafts [4,5] or treatments promoting BFCN function [6]. As such, cell models of human BFCNs will be invaluable in understanding AD and identifying novel therapeutics.
Here, we report a protocol for direct reprogramming of adult human skin fibroblasts into electrophysiologically mature hiBFCNs. The reprogramming efficiency is similar between fibroblasts of healthy individuals and sporadic AD patients. The reprogrammed neurons retain aging-associated features. Our preliminary results further indicate that hiBFCNs from sporadic AD patients exhibit time-dependent TAU hyperphosphorylation and impairment in nucleocytoplasmic transport. hiBFCNs may be useful for understanding the molecular mechanisms and discovering novel therapeutics for age-dependent progressive AD.
Rapid and efficient generation of hiBFCNs from adult human skin fibroblasts
We previously showed that human skin fibroblasts can be directly converted into cholinergic neurons without passing through a progenitor stage [18]. However, those neurons lack the expression of LHX8 (also known as LHX7 or L3 [21]) and GBX1, transcription factors crucial for BFCN specification [7,12,22-26]. We therefore examined these two transcription factors in various combinations with our original reprogramming factors for cholinergic neurons, NEUROG2 and SOX11 [18]. Two days post viral infection (2 dpi), transduced cells were switched to neuron-induction medium [18,19]. Neuronal conversion was monitored daily by live-cell fluorescence microscopy. Cells were replated at 14 dpi to remove most non-reprogrammed fibroblasts and were seeded onto astrocyte-coated plates with maturation medium for long-term survival (Fig. 1a).
Although induced neurons (iNs) could be obtained from normal (NL) healthy donor fibroblasts, some of them expressed HB9, a transcription factor specifically expressed in spinal motor neurons. This was likely due to a dominant role of NEUROG2 in motor neuron specification [18,27,28], whereas SOX11 promotes neuronal survival but not fate reprogramming [18]. We next replaced NEUROG2 with ASCL1, since both work as pioneer transcription factors dominantly controlling gene expression and neuronal fates [29,30]. Furthermore, ASCL1+ progenitors can give rise to cholinergic neurons [23,31-33].
Remarkably, a combination of the lentiviruses ASCL1-IRES-GFP-T2A-Sox11 and LHX8-IRES-GBX1 (hereafter referred to as ASLG) enabled a majority (> 90%) of the virus-transduced adult NL fibroblasts (indicated by the co-expressed GFP) to become TUJ1+ and CHAT+ neuron-like cells at 28 dpi (Fig. 1b-d). During this conversion process, cells rapidly changed their initially flat, spread-out morphology to one with bipolar and multipolar processes. They exhibited round or pyramidal somas, condensed nuclei, long axons, and multiple neurites (Fig. 1b, Additional file 1: Figure S1A). Based on our prior experience with human induced motor neurons (hiMNs) [16,18,19], we also examined a polycistronic lentiviral vector, LHX8-T2A-GBX1, so that both LHX8 and GBX1 would be expressed at roughly an equal molar ratio. However, this vector caused massive cell loss when examined at 4 dpi (Additional file 1: Figure S1B) and produced very few neurons at 28 dpi (Additional file 1: Figure S1C, D).
BFCNs are defined by their expression of the neurotrophin receptor p75NTR and Trk receptors in addition to cholinergic markers [34-37]. In the basal forebrain, p75NTR colocalizes exclusively with cholinergic neurons [38,39]. Immunocytochemistry showed that the reprogrammed neurons expressed markers for BFCNs, including p75NTR and the transcription factor ISL1 (Fig. 1f, g, Additional file 2: Figure S2D). ISL1 is the earliest marker of cholinergic-fate neurons, and it forms complexes with LHX8 or LHX3 to enhance gene expression for cholinergic specification [40-42]. More than 95% of GFP+ cells expressed ISL1 (Fig. 1c, g). On the other hand, these ASLG-induced neurons did not express HB9 (Fig. 1h), an exclusive marker for cholinergic motor neurons as shown in hiMNs [18,19] (Additional file 2: Figure S2E). Based on these characteristics, we named the ASLG-induced neurons human induced BFCNs (hiBFCNs).
Fig. 1 Direct induction of BFCNs from adult human skin fibroblasts. a A schematic representation of the reprogramming procedure. b Confocal images showing marker expression in hiBFCNs at 28 dpi. The virus-transduced cells are indicated by GFP fluorescence. Nuclei are counterstained with DAPI and include hiBFCNs and the co-cultured astrocytes. Scale bar, 50 μm. c Quantification of the reprogramming efficiency and neuronal purity. Cells were co-cultured with primary astrocytes and analyzed at 28 dpi (mean ± SEM; n = 3 independent samples; 10 randomly selected 20× fields per sample were examined). d-h Confocal images showing expression of the indicated markers in hiBFCNs co-cultured with astrocytes at 28 dpi. hiBFCNs do not express HB9 (h), a marker restricted to cholinergic motor neurons. Scale bar, 50 μm. i Marker expression by qRT-PCR analysis. Samples from fibroblasts, human brains, and hiMNs were used as controls. All gene expression was normalized to GAPDH.
Fibroblasts from adult AD patients could be similarly reprogrammed by ASLG into hiBFCNs (Additional file 3: Figure S3).
The molecular properties of hiBFCNs were also examined by qRT-PCR (Fig. 1i). As controls, we included samples from human brains and fibroblast-converted hiMNs. hiBFCNs showed robust expression of genes enriched in neurons (MAP2, MAPT, CALB1) and BFCNs (ISL1, NKX2.1, CHAT, VACHT, ACHE, TRKA), whereas the motor neuron-specific marker HB9 was not expressed. Due to contamination of non-converted fibroblasts in the samples, expression of fibroblast-enriched genes (S100A4, VIM) was detected but much reduced in hiBFCN samples.
hiBFCNs retain aging-associated features
To examine whether directly reprogrammed hiBFCNs maintain aging-associated features, we derived hiBFCNs from fibroblasts of young (Young) and old (Old) human donors. The latter samples consisted of fibroblasts from both aged NL and sporadic AD individuals. The reprogramming efficiency, about 92-94%, was similar among all these samples (Fig. 3a). Interestingly, hiBFCNs from old donors had markedly fewer primary neurites, although there was no significant difference between NL- and AD-hiBFCNs when examined at 51 dpi (Fig. 3b, c).
We performed single-cell analysis after immunocytochemistry by using a set of molecular markers shown to reflect age-dependent cellular characteristics: γH2AX, trimethylated H3K9 (H3K9me3), and heterochromatin protein 1γ (HP1γ) [14,16,43]. hiBFCNs were co-cultured with astrocytes and analyzed at 51 dpi. Consistent with our previous results [16], hiBFCNs from older donors showed a much larger number of γH2AX foci than those from younger donors (p < 0.0001; Fig. 3d, e), whereas the expression levels of H3K9me3 and HP1γ were significantly lower in old than in young hiBFCNs (p < 0.0001 for HP1γ and p = 0.0446 for H3K9me3; Fig. 3d, e). Very interestingly, the HP1γ level was also markedly lower in AD than in NL hiBFCNs (p = 0.0162; Fig. 3f), indicating that it could be a molecular marker for the diseased neurons. In contrast, hiBFCNs exhibited no disease-associated differences in γH2AX foci per cell or the expression level of nuclear H3K9me3 (Fig. 3f). Together, these results indicate that hiBFCNs from older donors indeed retain certain aging-associated features, consistent with prior reports on other directly reprogrammed neurons from human fibroblasts [14-17].
Relatively normal survival and soma size of AD-hiBFCNs
The survival of hiBFCNs was determined in co-culture with wild-type mouse astrocytes, which were required in general for neuronal growth and long-term culture. After replating at 14 dpi, surviving hiBFCNs were quantified at 21 and 28 dpi. Cell counts were then normalized to the starting neuronal number for each sample at 14 dpi. The survival rates were heterogeneous among the human samples, ranging from about 24 to 76% at 21 dpi and 20 to 62% at 28 dpi (Fig. 4a, b). Statistical analysis failed to show a significant difference between NL- and AD-hiBFCNs at either time point. Similarly, both NL- and AD-hiBFCNs showed heterogeneous but not significantly different soma sizes when analyzed at 51 dpi (242-373 μm²; Fig. 4c). These results indicate that AD-hiBFCNs do not exhibit intrinsic deficits in cell survival or soma size.
(Figure caption, continued) e-l Spiking characteristics of the APs for the indicated samples between 49 and 55 dpi (mean ± SEM; n = 5 neurons for NL1, and n = 13 neurons for NL2). m Representative sodium and potassium currents under voltage-clamp mode for a recorded hiBFCN at 55 dpi. An enlarged view of the boxed region is shown on the right. n-p Characteristics of ion currents for the recorded samples (mean ± SEM; n = 5 neurons for NL1, and n = 13 neurons for NL2). I_Na, sodium current; I_A, A-type potassium current; I_d, delayed-rectifier potassium currents. q A representative voltage sag evoked by hyperpolarizing currents for a hiBFCN at 55 dpi. r Quantification of sag voltages for the recorded samples (mean ± SEM; n = 4 neurons for NL1, and n = 13 neurons for NL2). s Representative I_h evoked by hyperpolarizing voltage steps for a hiBFCN at 55 dpi. t Quantification of the I_h currents for the recorded samples (mean ± SEM; n = 4 neurons for NL1, and n = 12 neurons for NL2).
Aging-associated TAU hyperphosphorylation in AD-hiBFCNs
AD-related tauopathy arises early in BFCNs and parallels cognitive decline [44,45]. We examined phosphorylated TAU through western blotting and immunocytochemistry by using the well-established AT8 antibody [46-50]. We used age- and gender-matched sample pairs cultured at the same time to reduce the potential influence of biological variability on phenotype. hiBFCNs were co-cultured with primary mouse astrocytes. When examined by western blotting at 28 dpi, no marked difference was observed for AT8 expression in NL- and AD-hiBFCNs (Additional file 4: Figure S4A). Since we failed to obtain enough hiBFCNs for western blotting at later culture time points, we focused our analysis on immunocytochemistry. When examined at 52 dpi and compared to the control NL1-hiBFCNs (70 YR, female, APOE3/3), AD1-hiBFCNs (62 YR, female, APOE3/4) showed much elevated hyperphosphorylated TAU in the somas (p = 0.0004; Fig. 5a, b). Interestingly, this phenotype was delayed when compared to another sample pair from younger individuals, NL2-hiBFCNs (47 YR, male, APOE3/3) and AD2-hiBFCNs (47 YR, male, APOE3/4). The increased hyperphosphorylated TAU phenotype in AD2-hiBFCNs was not observed at the early time point of 52 dpi (Fig. 5c, d), but it was evident at 62 dpi and became even more significant at 78 dpi (p = 0.0298 for 62 dpi and p = 0.0078 for 78 dpi; Fig. 5e-h). In contrast to the dysregulated TAU phosphorylation in somas, neuritic AT8 expression was similar in NL- and AD-hiBFCNs (Additional file 4: Figure S4B, C).
Fig. 3 hiBFCNs retain aging-associated features. a Conversion efficiency for the indicated human fibroblast samples analyzed at 14 dpi. hiBFCNs were derived from young (2-3 years) and old (47-79 years) samples and co-cultured with mouse astrocytes (mean ± SEM; n = 3717 GFP+ cells for Young samples; n = 10,323 GFP+ cells for NL samples; and n = 8752 GFP+ cells for AD samples). b Quantification of primary neurite numbers per neuron for the indicated samples at 51 dpi (mean ± SEM; n = 361 cells for Young and n = 1052 for Old samples; *p = 0.0175). c Primary neurite numbers per neuron for the indicated samples at 51 dpi (mean ± SEM; n = 875 for NL and n = 538 for AD samples; ns, not significant). d Representative confocal images for marker expression in the indicated samples co-cultured with astrocytes at 51 dpi. The profiles of DAPI+ nuclei are outlined. Scale bar, 10 μm. e Quantifications of marker expression for the indicated young or old samples. Each dot represents a single cell (mean ± SEM; γH2AX: n = 426 for Young, n = 1263 for Old, ****p < 0.0001; HP1γ: n = 139 for Young, n = 421 for Old, ****p < 0.0001; H3K9me3: n = 146 for Young, n = 243 for Old; *p = 0.0446). f Quantifications of marker expression for the indicated NL or AD samples. Each dot represents a single cell (mean ± SEM; γH2AX: n = 621 for NL, n = 642 for AD; HP1γ: n = 200 for NL, n = 221 for AD, *p = 0.0162; H3K9me3: n = 243 for NL, n = 93 for AD; ns, not significant).
Time-dependent impairment of nucleocytoplasmic transport in AD-hiBFCNs
TAU hyperphosphorylation leads to disrupted nucleocytoplasmic transport (NCT) in AD neurons [51]. We examined NCT activity in hiBFCNs by using a well-established reporter assay [15,51,52]. This reporter (2Gi2R) consists of 2xGFP containing an NES sequence (GFP-NES), an internal ribosome entry site (IRES), followed by 2xRFP containing an NLS sequence (RFP-NLS). GFP and RFP are localized in the cytoplasm and nucleus, respectively, in cells with normal NCT activity; however, this subcellular distribution of the reporters is disrupted in cells with abnormal NCT (Fig. 6a).
A higher ratio of nuclear GFP to nuclear RFP (GFPnuc/RFPnuc) represents disrupted NCT overall; a higher ratio of nuclear to cytoplasmic GFP (GFPnuc/GFPcyt) indicates impaired protein export, whereas a lower ratio of nuclear to cytoplasmic RFP (RFPnuc/RFPcyt) indicates compromised protein nuclear import.
The 2Gi2R reporter was introduced into hiBFCNs during the initial reprogramming process. Neurons were co-cultured with primary astrocytes until analysis by immunocytochemistry. Fluorescence intensity of the reporters in individual neurons was measured in the nucleus or cytoplasm, respectively, based on DAPI staining from confocal image sections. While no significant differences were observed between NL- and AD-hiBFCNs at the early time point of 28 dpi (Fig. 6b), AD-hiBFCNs showed markedly increased ratios of GFPnuc/RFPnuc and GFPnuc/GFPcyt when compared to the control NL group (p = 0.0309 for GFPnuc/RFPnuc and p = 0.0067 for GFPnuc/GFPcyt; Fig. 6c, d). Conversely, the RFPnuc/RFPcyt ratios were significantly decreased in AD-hiBFCNs compared with the controls (p < 0.0001; Fig. 6c, d). To examine whether such dysregulated reporter distribution might be due to nuclear membrane breakdown, we treated 2Gi2R-expressing hiBFCNs with leptomycin B (LMB), a potent and specific inhibitor of nuclear export. Time-lapse live-cell confocal imaging and immunocytochemistry showed that both NL- and AD-hiBFCNs robustly responded to LMB treatment, indicating that these cells had functional nuclear membranes (Additional file 5: Figure S5A-E). Together, these results indicate that AD-hiBFCNs exhibit time-dependent impairments of NCT activities.
Discussion
BFCNs critically regulate brain function through projections to the cortex, hippocampus, and thalamus [1]. Their dysfunction is an early hallmark of AD [2,3,53,54]. Our direct induction of hiBFCNs from human patient fibroblasts provides a much-needed cell model for understanding their molecular and cellular pathology in AD. These neurons retain certain aging-associated features that are critical to understanding adult-onset neurodegeneration in AD. Our proof-of-concept study indeed reveals some potential defects in hiBFCNs from AD patients, including time-dependent TAU hyperphosphorylation and dysfunctional nucleocytoplasmic transport. Such preliminary results warrant future studies with additional patient samples. hiBFCNs, especially those from AD patients, may also be employed to screen or validate small molecules as therapeutics for AD.
The replacement of NEUROG2 with ASCL1 is critical for hiBFCNs, although both of them can work as pioneer factors during neuronal reprogramming of fibroblasts [55,56]. We previously demonstrated that human skin fibroblasts can be directly and efficiently converted into cholinergic neurons by the combined actions of NEUROG2, SOX11, and small molecules [18]. During the reprogramming process, NEUROG2 acts as a pioneer factor, whereas SOX11 facilitates fate transition and promotes neuronal survival and maturation [18,55]. Approximately 80% of the NEUROG2- and SOX11-induced neurons are motor neuron-like, with unique early expression of HB9, a key transcription factor restricted to spinal motor neurons [18]. These neurons can be further coerced into hiMNs with the inclusion of ISL1 and LHX3, two transcription factors critical for motor neuron development [57]. However, when we combined NEUROG2 and SOX11 with LHX8 and GBX1, two factors essential for BFCNs [7,12,22-26], HB9+ motor neuron-like cells were still observed. Such a result clearly shows a dominant role of NEUROG2 in motor neuron reprogramming. On the other hand, replacing NEUROG2 with ASCL1 completely eliminated the generation of HB9+ cells and produced hiBFCNs.
Fig. 4 Cell survival and soma size. a Survival of the indicated hiBFCNs assayed at 21 dpi (mean ± SEM; ns, not significant). b Survival of the indicated hiBFCNs assayed at 28 dpi (mean ± SEM; ns, not significant). c Quantification of soma size for the indicated hiBFCNs at 51 dpi (mean ± SEM; n = 192 for NL1, n = 134 for NL2, n = 216 for NL3, n = 82 for NL4, n = 178 for AD1, n = 180 for AD2, n = 220 for AD3, and n = 69 for AD4; ns, not significant).
Without passing through a pluripotent stem cell state, the direct reprogramming process for hiBFCNs is rapid and efficient. When examined at 28 dpi, more than 90% of the virus-transduced cells become neurons expressing stereotypical markers for BFCNs, such as CHAT, p75NTR, ISL1, and VACHT. qRT-PCR results further confirmed the BFCN lineage. hiBFCNs become mature at 49 dpi and beyond, showing typical inward sodium currents and outward potassium currents and firing repetitive APs when stimulated. They also show higher expression of L1CAM and exhibit more mature cell morphology at 78 dpi.
(Fig. 5 caption, continued) c Representative confocal images of marker expression in the indicated hiBFCNs at 52 dpi. Scale bar, 10 μm. d Quantification of soma AT8 expression in the indicated hiBFCNs at 52 dpi (mean ± SEM; n = 66 for NL2; n = 41 for AD2; ns, not significant). e Representative confocal images of marker expression in the indicated hiBFCNs at 62 dpi. Scale bar, 10 μm. f Quantification of soma AT8 expression in the indicated hiBFCNs at 62 dpi (mean ± SEM; n = 39 for NL2; n = 25 for AD2; *p = 0.0298). g Representative confocal images of marker expression in the indicated hiBFCNs at 78 dpi. Scale bar, 10 μm. h Quantification of soma AT8 expression in the indicated hiBFCNs at 78 dpi (mean ± SEM; n = 50 for NL2; n = 30 for AD2; **p = 0.0078).
During the early stage of the reprogramming process, cell death is a main cause of neuronal loss. Cell death could occur through apoptosis, necroptosis, and other pathways [58,59]. The replating step at 14 dpi, which is important for partial purification of the converted neurons, may also result in axotomy and subsequent axonal degeneration and cell death [59]. Interestingly, both NL- and AD-hiBFCNs respond similarly to the reprogramming process and the replating procedure, as we do not detect significant differences in cell survival or soma size before 28 dpi.
Nonetheless, the reprogramming procedure may be further optimized for higher neuronal yield and purity in the future. Fibroblasts from different human patients also exhibit heterogeneity, which may lead to variable virus-transduction efficiency, cell survival, and neuronal yield.
We and others have recently demonstrated that directly induced neurons from adult human fibroblasts retain aging-associated signatures, which would be erased if cells passed through a pluripotent stem cell stage [15-17]. These signatures include age-specific transcriptional and epigenetic profiles and age-dependent changes in DNA damage, chromatin structure, nuclear organization, and nucleocytoplasmic compartmentalization. Retaining these aging-associated features is critically important for understanding late-onset neurodegeneration such as AD, since advanced age is the greatest known risk factor. Consistent with other directly induced neurons [14-17], our hiBFCNs also retain certain aging-associated signatures of their parental fibroblasts.
A disadvantage of directly induced neurons is their limited number and heterogeneity. Unlike ESCs or iPSCs, which possess self-renewal ability, adult human fibroblasts become senescent after a limited number of passages. Directly induced neurons, including hiBFCNs, are therefore better suited for single-cell analyses, such as immunocytochemistry, electrophysiology, and single-cell transcriptomics and genomics. Because long-term survival also requires co-culture with healthy astrocytes, hiBFCNs and some other directly converted neurons from adult human fibroblasts may not be well suited for biochemical analyses such as western and northern blotting. Directly induced neurons including hiBFCNs are also heterogeneous, since their parental fibroblasts cannot be single-cell cloned or made isogenic through gene editing. On the other hand, such heterogeneity may well resemble endogenous conditions in human patients. A good practice is to use age- and gender-matched sample pairs cultured at the same time.
Conclusions
We established a protocol for the generation of aging-relevant BFCNs from adult human skin fibroblasts, including those of sporadic AD patients. To our knowledge, this is the first study in which direct lineage reprogramming bypasses pluripotency and converts fully differentiated adult somatic cells into electrophysiologically mature hiBFCNs. Our proof-of-concept study further reveals that hiBFCNs show promise as a cell model for understanding AD pathology, including tauopathy and nucleocytoplasmic transport dysfunction. The availability of these cells may also facilitate therapeutics identification for AD patients.
Animals
Wild-type C57BL/6 J mice were obtained from The Jackson Laboratory. All mice were housed under a controlled temperature and 12-h light/dark cycle with ad libitum access to food and water in the UT Southwestern animal facility. All experimental procedures and protocols were approved by the Institutional Animal Care and Use Committee at UT Southwestern.
Human fibroblast culture
All human fibroblasts from either healthy controls or sporadic AD patients with different APOE alleles were purchased from the Coriell Institute for Medical Research (Table 1). They were maintained in Fibroblast Medium (DMEM containing 15% FBS and 1% penicillin/ streptomycin) with 5% CO 2 at 37°C.
Neuron induction and culture
Direct lineage reprogramming was conducted according to a previous protocol with some modifications [18,19]. In brief, fibroblasts were seeded onto Matrigel-coated culture vessels (4.8 × 10⁵ cells per 24- or 48-well plate, 3 × 10⁵ cells per 6-cm dish, or 1 × 10⁶ cells per 10-cm dish) and cultured in Fibroblast Medium for 1 day. Then, cells were transduced with lentiviral supernatants in the presence of 8 μg/ml polybrene. Fibroblast culture medium was refreshed after overnight incubation. One day later, these cells were switched into Reprogramming Medium (C2 medium supplemented with 10 μM FSK (Sigma-Aldrich), 1 μM LDN-193189 (EMD Millipore), and 10 ng/ml FGF2 (PeproTech)). The C2 medium was composed of DMEM:F12:neurobasal (1:1:1), 0.8% N2 (Invitrogen), 0.8% B27 (Invitrogen), and 1% penicillin/streptomycin. The Reprogramming Medium was half-changed every other day until 14 dpi. These cells were dissociated with 0.05% trypsin for 3 min at 37°C and resuspended in Fibroblast Medium to quench trypsin activity. This cell suspension was then plated into a 0.1% gelatin-coated culture dish, to which contaminating fibroblasts could tightly attach. About one and a half hours later, floating cells, which mainly consisted of induced neurons, were collected by centrifugation at 400 g for 3 min. Cells were resuspended in C2 medium and centrifuged again to remove cell debris. Finally, induced neurons were plated into primary astrocyte-coated plates in Maturation Medium (C2 medium supplemented with 5 μM FSK, 10 ng/ml each of BDNF, GDNF, and NT3 (PeproTech), and 50 ng/ml NGFβ (PeproTech)). Unless indicated otherwise, Maturation Medium was half-changed twice a week. For conversion efficiency calculation, induced neurons were plated into primary astrocyte-coated 96-well plates (at least three wells per condition). Fourteen days after replating (at 28 dpi) in Maturation Medium, cells were fixed and stained with antibodies for GFP, TUJ1, CHAT, or ISL1.
Nuclei were counterstained with DAPI. The percentage of TUJ1+GFP+ cells among total GFP+ cells was calculated as the conversion efficiency. The percentage of CHAT+TUJ1+ cells among TUJ1+ cells was calculated as the conversion purity. The percentage of CHAT+ or ISL1+ cells among total GFP+ cells was also calculated. Human induced motor neurons (hiMNs) were generated essentially as described previously [19].
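The efficiency and purity calculations above are simple ratios over per-well marker counts. A minimal sketch of that arithmetic; the cell counts and dictionary keys below are illustrative stand-ins, not data from the study:

```python
def conversion_metrics(counts):
    """Compute reprogramming metrics from per-well marker counts.

    counts: dict with keys 'GFP', 'TUJ1_GFP', 'CHAT_TUJ1', 'TUJ1'
    (numbers of cells positive for each marker combination).
    """
    # Efficiency: TUJ1+GFP+ cells among total GFP+ (virus-transduced) cells
    efficiency = 100.0 * counts["TUJ1_GFP"] / counts["GFP"]
    # Purity: CHAT+TUJ1+ cells among all TUJ1+ neurons
    purity = 100.0 * counts["CHAT_TUJ1"] / counts["TUJ1"]
    return efficiency, purity

# Hypothetical counts for one well
eff, pur = conversion_metrics({"GFP": 500, "TUJ1_GFP": 460, "CHAT_TUJ1": 430, "TUJ1": 465})
print(f"efficiency {eff:.1f}%, purity {pur:.1f}%")  # prints: efficiency 92.0%, purity 92.5%
```

In practice the same ratios would be averaged over at least three wells per condition, as described above.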
Immunocytochemistry
Cell cultures at the indicated time points were fixed with 4% paraformaldehyde (PFA) in PBS for 15 min at room temperature, washed twice with PBS, and then permeabilized/blocked for 1 h in blocking solution (1x PBS containing 0.2% Triton X-100 and 3% BSA). Primary antibodies (Table S1) in blocking solution were then added and incubated overnight at 4°C, followed by PBS washes and incubation with the corresponding Alexa Fluor-conjugated secondary antibodies raised in donkey (Invitrogen, 1:500). Images were obtained with a Nikon A1R confocal microscope. The mean fluorescence intensity of AT8 staining in the cytosol or neurites of each neuron was quantified using ImageJ.
Cell survival analysis
hiBFCNs co-cultured with primary mouse astrocytes were used for survival analysis. Cortical astrocytes were prepared as previously described with modifications [20]. Briefly, cortices were dissociated with a solution containing papain (10 U/ml, with 1 mM Ca²⁺ and 0.5 mM EDTA) and 1% DNase for 20 min at 37°C. Tissues were pelleted through brief centrifugation and further dissociated using a pipette in FBS-containing medium. Cells were passed through a 40 μm nylon strainer. The cell mixture was spun at 400 g for 3 min, re-suspended in growth media consisting of DMEM (Invitrogen) supplemented with 10% FBS, and plated into 0.1% gelatin-coated 75 cm² flasks. Media was exchanged every 3 days. Endogenous mouse neurons and non-astrocytes were removed by vigorous shaking and a few cycles of passaging, freezing, thawing, and replating. hiBFCNs at 14 dpi were replated into astrocyte-coated 96-well plates (survival analysis) or coverslip-containing 24-well plates (neuronal morphology analysis).
qRT-PCR
[…] Supplementary Table S2, and their quality was assessed by the dissociation curve. Relative gene expression was determined by using the 2^-ΔΔCt method after normalization to the loading control GAPDH.
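The 2^-ΔΔCt method mentioned above is the standard relative-quantification formula for qRT-PCR. A minimal sketch of the calculation; the Ct values below are purely illustrative, not measurements from the study:

```python
def ddct_fold_change(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ΔΔCt method.

    ΔCt = Ct(target) - Ct(reference gene, e.g. GAPDH), computed for both the
    sample of interest and the control; fold change = 2^-(ΔCt_sample - ΔCt_control).
    """
    dct_sample = ct_target_sample - ct_ref_sample
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(dct_sample - dct_ctrl)

# Illustrative Ct values: a target gene vs GAPDH in a test sample and a control
print(ddct_fold_change(24.0, 18.0, 28.0, 18.0))  # prints: 16.0
```

Here ΔCt is 6 cycles in the sample and 10 in the control, so ΔΔCt = -4 and the target is 2⁴ = 16-fold higher in the sample.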
Nucleocytoplasmic transport
The 2Gi2R reporter-expressing lentivirus was included during the reprogramming of fibroblasts to hiBFCNs. Cells were replated onto astrocyte-coated and Matrigel-treated coverslips. At the indicated time points, cells were fixed with 4% PFA, followed by immunostaining with antibodies against GFP, RFP, and TUJ1. Nuclei were counterstained with DAPI. For live-cell imaging, cells at the indicated time point were maintained at 37°C with 5% CO₂ under the Nikon A1R confocal microscope system. The target cells were located under a 60x objective and imaged as 0 min. The culture medium was then replaced with prewarmed medium containing 50 nM leptomycin B (LMB). Cells were subsequently imaged every 10 min for a total of 1 h, followed by fixation with 4% PFA and immunocytochemistry. Images of a single confocal plane across the center of the nucleus were obtained on the Nikon A1R confocal microscope with a pinhole setting of 2.5. Because of the complexity of neuron-astrocyte co-cultures, the neuronal nucleus and soma were manually defined using ImageJ. The mean fluorescence intensity of GFP or RFP was measured separately in the cytoplasm or nucleus of hiBFCNs. The ratios GFPnuc/RFPnuc, GFPnuc/GFPcyt and RFPnuc/RFPcyt were calculated in Microsoft Excel and analyzed in GraphPad Prism 8.
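The three ratios described above reduce to simple arithmetic on the per-cell mean intensities. A minimal sketch (function and variable names are illustrative, not from the study's Excel/Prism workflow):

```python
def transport_ratios(gfp_nuc, gfp_cyt, rfp_nuc, rfp_cyt):
    """Nucleocytoplasmic transport readouts from mean fluorescence intensities.

    Inputs are mean intensities measured in manually defined nuclear and
    cytoplasmic (somatic) ROIs of a single neuron; names are illustrative.
    """
    return {
        "GFPnuc/RFPnuc": gfp_nuc / rfp_nuc,
        "GFPnuc/GFPcyt": gfp_nuc / gfp_cyt,
        "RFPnuc/RFPcyt": rfp_nuc / rfp_cyt,
    }

# Hypothetical cell: GFP mostly cytoplasmic, RFP mostly nuclear.
r = transport_ratios(gfp_nuc=50.0, gfp_cyt=100.0, rfp_nuc=120.0, rfp_cyt=60.0)
```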
Electrophysiology
Whole-cell patch-clamp recordings were made under visual guidance using infrared differential interference contrast (IR-DIC) and GFP fluorescence to identify GFP+ cells. For analysis of intrinsic neuronal properties, cells were maintained at 30°C in a submersion chamber with Tyrode solution containing 150 mM NaCl, 4 mM KCl, 2 mM MgCl₂, 3 mM CaCl₂, 10 mM glucose, and 10 mM HEPES at pH 7.4 (adjusted with KOH) and 300 mOsm. Whole-cell recordings were performed on induced neurons using recording pipettes (approximately 5-9 MΩ) filled with intracellular solution (0.2 mM EGTA, 130 mM K-gluconate, 6 mM KCl, 3 mM NaCl, 10 mM HEPES, 4 mM ATP-Mg, 0.4 mM GTP-Na, 14 mM phosphocreatine-di(Tris) at pH 7.2 (adjusted with KOH) and 285 mOsm). Series and input resistance were measured in voltage-clamp mode with a 400 ms, 10 mV step from a −60 mV holding potential (filtered at 10 kHz, sampled at 50 kHz). Cells were only accepted for analysis if the series resistance was less than 30 MΩ and stable (<10% change) throughout the experiment. Input resistance ranged from 0.2 to 2 GΩ. Currents were filtered at 3 kHz, acquired and digitized at 10 kHz on a PC using Clampex 10.3 software (Molecular Devices). A MultiClamp 700B amplifier (Molecular Devices, Palo Alto, CA) was used for recordings.
Action potentials were recorded in current-clamp mode and elicited by a series of current injections from −20 to 200 pA in 20-pA increments, 800 ms in duration. Sodium and potassium currents were recorded in voltage-clamp mode in response to a series of voltage steps ranging from −60 to +60 mV in 10-mV increments, 250 ms in duration, according to standard protocols. Sag voltage was recorded in current-clamp mode with hyperpolarizing current (−80 to −150 pA, 500 ms). I_h was recorded in voltage-clamp mode by injecting voltage steps from −110 to −40 mV in 10-mV increments for a 6 s duration, averaged over 10 traces. In all voltage-clamp recordings, cells were clamped at −60 mV except during the voltage-step protocol. All current-clamp recordings were made at resting membrane potential, without any current injection except where otherwise stated.
Data analysis was performed using Clampfit 10.3 software (Molecular Devices). The action potential (AP) was analyzed as described previously [19]. The AP trace immediately above threshold was used to determine the delay of the 1st spike, measured as the length of time from the start of the current step to the peak of the AP. The same AP trace was used to measure the AP threshold, defined as the voltage at which the trace slope changes most sharply. The same trace was also used to determine AP amplitude, half-width, maximum velocity of rise, and decay slope using the "Statistics" function from the "Analyze" menu. AP frequency was obtained by dividing the maximum number of spikes during the current-steps protocol by the step duration (800 ms).
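The first-spike delay and threshold measurements described above can be approximated programmatically. The sketch below is an assumption-laden illustration (Clampfit's exact slope criterion may differ; here the "sharpest change of slope" is approximated by the maximum second derivative before the peak):

```python
import numpy as np

def ap_delay_and_threshold(t, v, step_start):
    """Estimate 1st-spike delay and AP threshold from a current-clamp trace.

    Delay: time from current-step onset (step_start, in s) to the AP peak.
    Threshold: voltage where the trace slope changes most sharply, taken
    here as the maximum of the numerical second derivative before the peak.
    """
    t = np.asarray(t, dtype=float)
    v = np.asarray(v, dtype=float)
    i_peak = int(np.argmax(v))
    delay = t[i_peak] - step_start
    d2v = np.gradient(np.gradient(v, t), t)      # second derivative dV^2/dt^2
    i_thr = int(np.argmax(d2v[:i_peak])) if i_peak > 0 else 0
    return delay, v[i_thr]

# Synthetic demo trace: resting at -70 mV with a spike peaking at t = 50 ms.
t = np.linspace(0.0, 0.1, 1001)
v = -70.0 + 100.0 * np.exp(-((t - 0.05) / 0.002) ** 2)
delay, thr = ap_delay_and_threshold(t, v, step_start=0.01)  # delay ~ 40 ms
```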
Similarly, sodium and potassium currents were measured using the "Statistics" function; the largest current was used.
Statistical analysis
All experiments were performed at least twice in triplicate unless otherwise indicated. Data are presented as mean ± SEM. One-way ANOVA or unpaired Student's t-test was used to calculate statistical significance in GraphPad Prism. Significant differences are indicated by *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001.
Incidence and risk factors for persistent symptoms in adults previously hospitalized for COVID-19
Abstract Background The long‐term sequelae of COVID‐19 remain poorly characterized. We assessed persistent symptoms in previously hospitalized patients with COVID‐19 and assessed potential risk factors. Methods Data were collected from patients discharged from 4 hospitals in Moscow, Russia between 8 April and 10 July 2020. Participants were interviewed via telephone using an ISARIC Long‐term Follow‐up Study questionnaire. Results 2,649 of 4,755 (56%) discharged patients were successfully evaluated, at median 218 (IQR 200, 236) days post‐discharge. COVID‐19 diagnosis was clinical in 1291 and molecular in 1358. Most cases were mild, but 902 (34%) required supplemental oxygen and 68 (2.6%) needed ventilatory support. Median age was 56 years (IQR 46, 66) and 1,353 (51.1%) were women. Persistent symptoms were reported by 1247 (47.1%) participants, with fatigue (21.2%), shortness of breath (14.5%) and forgetfulness (9.1%) the most common symptoms and chronic fatigue (25%) and respiratory (17.2%) the most common symptom categories. Female sex was associated with any persistent symptom category OR 1.83 (95% CI 1.55 to 2.17) with association being strongest for dermatological (3.26, 2.36 to 4.57) symptoms. Asthma and chronic pulmonary disease were not associated with persistent symptoms overall, but asthma was associated with neurological (1.95, 1.25 to 2.98) and mood and behavioural changes (2.02, 1.24 to 3.18), and chronic pulmonary disease was associated with chronic fatigue (1.68, 1.21 to 2.32). Conclusions Almost half of adults admitted to hospital due to COVID‐19 reported persistent symptoms 6 to 8 months after discharge. Fatigue and respiratory symptoms were most common, and female sex was associated with persistent symptoms.
| INTRODUCTION
The emergence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has placed a significant burden on health services and society worldwide. There have now been well over 100 million coronavirus disease 2019 (COVID-19) cases reported, with a mortality rate of around 2.2% globally. 1 The acute presentation of COVID-19 has now been well investigated, with fever, cough, shortness of breath and anosmia among the most commonly reported symptoms. [2][3][4] It has become evident that a substantial proportion of people experience ongoing symptoms including fatigue and muscle weakness, joint and muscle pain, and breathlessness, months after the acute phase of COVID-19. [5][6][7] This phenomenon is now commonly referred to as Long COVID but has also been described as post-COVID syndrome, Post-Acute Sequelae of SARS-CoV-2 infection (PASC) or the post-COVID-19 condition, 8 or patients have been labelled COVID long-haulers. 9,10 There is still a paucity of long-term follow-up data, which means we have limited knowledge of the full range of symptoms, duration of disease and potential risk factors. Recently published data from China describing long-term consequences of COVID-19 show that 76% of previously hospitalized adult patients have at least one symptom 6 months after acute infection. 6 In a UK registry study of 47,780 previously hospitalized adults, 29.4% were readmitted and 12.3% died after initial discharge with multi-organ dysfunction. 11 There is an urgent need for accurate long-term follow-up of COVID-19 patients, 7 to inform future management plans and address the devastating impacts of this condition on the quality of life (QoL) of people affected. This observational cohort study aimed to investigate the incidence of long-term consequences in adults previously hospitalized for COVID-19 and to assess risk factors for Long COVID in Moscow, Russia.
We used the standardized follow-up data collection protocol of the International Severe Acute Respiratory and Emerging Infection Consortium (ISARIC).
| Study design, setting and participants
This is a longitudinal cohort study of patients with suspected or confirmed COVID-19 infection admitted to Sechenov University Hospital Network (four tertiary hospitals) in Moscow, Russia.
We collected the follow-up data between 2 December 2020 and 14 January 2021 from patients discharged between 8 April 2020 and 10 July 2020. We included adult patients (≥18 years of age) with either reverse transcriptase polymerase chain reaction (RT-PCR) confirmed SARS-CoV-2 infection or clinically confirmed infection, where the laboratory testing result was negative, inconclusive or unavailable.
The acute phase data, including comorbidities and disease severity, were extracted from electronic medical records (EMR) and the Local Health Information System (HIS) at the host institution using the modified and translated ISARIC WHO Clinical Characterisation protocol (CCP). 12 Details of the acute phase data collection are described elsewhere. 3
| Data management
We used REDCap electronic data capture tools (Vanderbilt University, Nashville, TN, USA) hosted at Sechenov University and Microsoft Excel (Microsoft Corp) for data collection, storage and management. 11, 12 The baseline characteristics, including demographics, symptoms on admission and comorbidities, had been extracted from EMRs and entered into REDCap previously.
| Definitions
The acute disease severity was stratified in accordance with Arnold et al. 10 by a three-category scale based on the degree of supportive care required during the hospital stay: mild (no supplementary oxygen or intensive care), moderate (supplementary oxygen during hospitalization) and severe (need for non-invasive respiratory modalities (NIV), invasive mechanical ventilation (IMV) and/or admission to an intensive care unit (ICU)). A difference of 10 points on the EQ-VAS defined a relevant change in health status. 4 All comorbidities were reported by the patients and/or family members at the time of hospital admission and subsequently double-checked during the follow-up telephone interview.
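The three-category severity scale maps directly onto the supportive-care variables. A minimal sketch of that mapping (field and function names are illustrative, not from the study's code):

```python
def covid_severity(supplemental_oxygen, niv_or_imv, icu_admission):
    """Three-category acute severity scale (after Arnold et al.):
    severe  - NIV/IMV and/or ICU admission,
    moderate - supplemental oxygen during hospitalization,
    mild    - neither of the above.
    """
    if niv_or_imv or icu_admission:
        return "severe"
    if supplemental_oxygen:
        return "moderate"
    return "mild"
```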
For the purpose of this study, we defined "persistent symptoms" (PS) as symptoms present since hospital discharge only.
KEY MESSAGES
• 6-8 months after hospital discharge, around half of patients with COVID-19 experienced persistent symptoms
• Chronic fatigue and respiratory problems were the commonest persistent symptoms, with 11.3% having multisystem involvement
• Female sex was associated with a higher risk of persistent symptoms

PS present at the time of follow-up were categorized into respiratory, gastrointestinal, dermatological, chronic fatigue, neurological, mood and behaviour, and sensory categories (Table S1). Symptom categorization was based on previously published literature 14,15 and international expert group discussions.
| Statistical analysis
Descriptive statistics were calculated for baseline characteristics.
Continuous variables were summarized as median (with interquartile range) and categorical variables as frequency (percentage). The chisquared test or Fisher's exact test was used for testing differences in proportions between groups. The Wilcoxon rank-sum test was used for testing the hypotheses about differences in means between the groups.
We performed multivariable logistic regression to investigate associations of demographic characteristics, comorbidities and severity of acute-phase COVID-19 with the presence of PS categories at the time of the follow-up interview. To enhance the robustness of the effect estimates, only comorbidities present in at least 3% of the cohort were included in the modelling. We have previously found no significant differences in clinical signs, symptoms, laboratory test results and risk factors for in-hospital mortality between clinically diagnosed patients and patients with positive RT-PCR. 3 Therefore, primary analysis was performed using the full cohort. Robustness of the findings was then investigated via a sensitivity analysis restricted to the subset of people with RT-PCR confirmed SARS-CoV-2 infection (ICD U07.1). We have not performed any imputation for missing data.
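The 3% prevalence filter applied before modelling can be sketched as follows (a hypothetical representation of the cohort as one dict of comorbidity flags per patient; not the authors' R code):

```python
def select_comorbidities(records, threshold=0.03):
    """Keep only comorbidities present in at least `threshold` of the cohort,
    as done here before multivariable modelling.

    `records` is a list of dicts mapping comorbidity name -> bool.
    Returns the retained comorbidity names, sorted.
    """
    n = len(records)
    names = set().union(*(r.keys() for r in records)) if records else set()
    return sorted(
        c for c in names
        if sum(bool(r.get(c, False)) for r in records) / n >= threshold
    )
```

For example, in a cohort of 100 patients a comorbidity recorded in 5 of them (5%) would be retained, while one recorded in 2 (2%) would be excluded.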
Venn diagrams were used to present the coexistence of the five most common persistent symptoms.
Two-sided p-values were reported for all statistical tests, a pvalue below 0.05 was considered to be statistically significant.
Statistical analysis was performed using R version 3.5.1.
| Description of study population
As outlined in Figure 1, of the 5,040 patients admitted to the hospitals with suspected COVID-19 before 10 July 2020, 4,755 were discharged alive or transferred to another facility. Of the 4,019 patients with accurate contact information available, 2,649 were available for follow-up (response rate 68.5%); all 2,649 had no missing baseline data in the electronic database and were included in the analysis. Of the 3,868 patients with contact information available, 52 (1.3%) died after hospital discharge.
Analysis of the non-response data was performed and is presented in Table S3.
| Symptoms at the time of follow-up
Although many patients had PS since discharge, some participants reported at least one symptom of differing duration during the follow-up interview: 285 (10.8%) had experienced these symptoms for 3 to 6 months, 179 (6.8%) for 2 to 3 months, 157 (5.9%) for 1 to 2 months, 103 (3.9%) for 2 to 4 weeks and 140 (5.3%) for 1 to 2 weeks. The duration of the ten most common symptoms at the time of the follow-up is shown in Figure S1.
| Risk factors associated with persistent symptom categories
Risk factors for all categories were assessed. In multivariable regression analysis, female sex was a predictor of "any" PS category, with an OR of 1.83 (95% CI 1.55 to 2.17). Full results are presented in Table 2, and forest plots are available as supplementary material (Table S4).

Almost half of the patients reported at least one PS, with chronic fatigue and respiratory problems being the most frequent PS categories. One in ten patients reported multisystem impacts, with three or more categories of PS present at follow-up. PS were experienced by both sexes, with a higher risk amongst women. Pre-existing chronic pulmonary disease was associated with chronic fatigue, and asthma with a higher risk of neurological symptoms and mood and behaviour problems.
| Persistent symptoms
Other studies of previously hospitalized and non-hospitalized COVID-19 patients reported the presence of short- and long-term symptoms. 16
| Risk factors associated with persistent symptoms
Female sex was significantly associated with an increased risk of PS, regardless of symptom category, reflecting previous findings from cohort 6 and digital app 19 studies. Chronic pulmonary disease was a risk factor for the development of chronic fatigue. An association between chronic pulmonary disease and severe acute COVID-19 was found in many studies, 20 but it has not been previously reported as a risk factor for COVID-19 sequelae. The presence of chronic pulmonary disease has been previously associated with chronic fatigue syndrome. 21 The pandemic also had a significant adverse impact on care and support for patients with chronic pulmonary conditions, including a reduction in face-to-face clinic availability, lack of access to pulmonary rehabilitation sessions and hospital care during an exacerbation due to fear of COVID-19 exposure. 22 The causality cannot be determined and we are unable to conclude if lack of follow-up and involvement in rehabilitation programmes for chronic pulmonary conditions was the cause of ongoing symptoms. Future research should investigate COVID-19 consequences in this group of patients in greater detail.

FIGURE 5 Multivariable logistic regression model. Odds ratios and 95% CIs for the "Chronic fatigue" category of persistent symptoms at the time of follow-up. Abbreviation: CI, confidence interval. (A) primary analysis (age, sex, comorbidities, severity and RT-PCR were included as potential risk factors); (B) sensitivity analysis (performed in a subgroup of RT-PCR positive patients only).

TABLE 2 Risk factors significantly associated with the different categories of persistent symptoms in the primary (age, sex, comorbidities, severity and RT-PCR were included as potential risk factors) and sensitivity (performed in a subgroup of RT-PCR positive patients only) multivariable regression analyses. Abbreviations: OR, odds ratio; CI, confidence interval; NA, not applicable; RT-PCR "+," real-time polymerase chain reaction confirmed SARS-CoV-2 infection. *The assessment of robustness is based on the magnitude, direction and/or statistical significance of the estimates. † Number of patients with at least one persistent symptom from this category. Statistically significant associations are presented in bold.
Data from the COVID Symptom Study app in the UK suggested that asthma is a risk factor for the post-COVID condition. 23 However, it did not separate ongoing respiratory symptoms, which may have been due to incitement of the pre-existing asthma, from those in other systems. We found that asthma was associated with an increased risk of neurological symptoms and mood and behaviour changes.
| Health state
Patients with all categories of PS reported significantly lower health state when compared with symptom-free patients. They also considered the health state to be lower than before the COVID-19 episode. This is consistent with previous reports from different countries. 5,6,27 This finding points to the multi-factorial adverse effects of COVID-19 and to the need for wide ranging and longer term support.
| Strengths and limitations
A major strength of this study is the use of pre-positioned, standardized data collection protocols.

Future studies should focus on patients with multisystem involvement; longer follow-up of a large sample will allow for a better understanding of COVID-19 sequelae and help with phenotype recognition. Investigation of immunological aspects of the association between asthma and several long-COVID outcomes may identify mechanisms and therapeutic targets for therapy to mitigate adverse consequences.
ACK N OWLED G EM ENTS
We thank RFBR, grant 20-04-60063 for supporting the work. We would also like to thank UK Embassy in Moscow for providing a
Gauge Backgrounds and Zero-Mode Counting in F-Theory
Computing the exact spectrum of charged massless matter is a crucial step towards understanding the effective field theory describing F-theory vacua in four dimensions. In this work we further develop a coherent framework to determine the charged massless matter in F-theory compactified on elliptic fourfolds, and demonstrate its application in a concrete example. The gauge background is represented, via duality with M-theory, by algebraic cycles modulo rational equivalence. Intersection theory within the Chow ring allows us to extract coherent sheaves on the base of the elliptic fibration whose cohomology groups encode the charged zero-mode spectrum. The dimensions of these cohomology groups are computed with the help of modern techniques from algebraic geometry, which we implement in the software gap. We exemplify this approach in models with an Abelian and non-Abelian gauge group and observe jumps in the exact massless spectrum as the complex structure moduli are varied. An extended mathematical appendix gives a self-contained introduction to the algebro-geometric concepts underlying our framework.
Introduction
String theory encodes the consistent coupling of gauge dynamics to gravity like no other framework for quantum gravity available to date. In compactifications of string theory to lower dimensions, a significant portion of this information is encapsulated in the geometry of the compactification space. A particularly coherent approach to studying the ensuing relations between geometry, gauge theory and gravity has emerged in the context of F-theory [1][2][3]. In this spirit F-theory compactifications to six dimensions have been under close scrutiny over the past years with the aim of establishing an ever more accurate dictionary between the properties of the effective field theory and the geometry and topology of elliptically fibred Calabi-Yau three-folds. This has culminated so far in the classification of end points of Higgs branches in six-dimensional (1, 0) theories [4] as well as of the possible (1, 0) superconformal field theories with a tensor branch [5,6] arising from F-theory.
When we try to extend the programme of understanding the effective field theories of F-theory compactifications to lower dimensions, more intricate structures are encountered which play no role in six-dimensional F-theory vacua. F-theory compactifications to four dimensions are clearly motivated not only by their possible connections to particle physics [7][8][9][10], but also because they may open yet another door towards understanding the structure of four-dimensional N = 1 supersymmetric quantum field theories. F-theory compactifications to two dimensions [11,12] give a framework for studying new examples of chiral (0, 2) quantum field theories and SCFTs.
The perhaps most important difference in compactifications to four and two dimensions compared to their six-dimensional cousins is the appearance of non-trivial gauge backgrounds, which are of the utmost relevance for the very definition of the string vacuum. First, such backgrounds can generate potential terms in the effective action. The induced D-terms and F-terms are moduli dependent and play an essential role in stabilising the latter. Second, and not unrelatedly, the spectrum of charged massless matter fields depends on the gauge background. The programme of understanding F-theory compactifications to four and two dimensions therefore hinges on our ability to extract this information from a given flux background. The present article reports on what we believe is important progress in this direction.
The first step is to represent the gauge background in a globally defined F-theory vacuum in a computationally accessible way. For definiteness, let us from now on focus on F-theory compactified to four dimensions on an elliptically fibred Calabi-Yau 4-fold Y₄. By duality with M-theory, massless matter states arise from the excitations of M2-branes wrapping vanishing cycles in the elliptic fibre. Since the M2-branes couple to the M-theory 3-form potential C₃, our task is to represent the gauge data encoded in this anti-symmetric gauge field. Mathematically, such gauge backgrounds are encapsulated in the Deligne cohomology group H⁴_D(Ŷ₄, Z(2)), whose construction is reviewed e.g. in [13]. 1 The Deligne cohomology contains both information about the gauge flux, i.e. the background value of the M-theory 4-form field strength G₄, and about the flat, but topologically non-trivial configurations of the gauge potential. In [22] it has been described how this data is most conveniently represented by equivalence classes of algebraic 2-cycles modulo rational equivalence, i.e. by elements of the Chow group CH²(Ŷ₄). This construction rests on the existence (but fortunately not the details) of a refined cycle map, known in the mathematics literature (e.g. [13]) to be a ring homomorphism from the Chow group to the Deligne cohomology group which is surjective over Q if the Hodge conjecture holds. The advantage of this approach is that, unlike H⁴_D(Ŷ₄, Z(2)), the group of Chow classes is accessible very directly and in a constructive, geometric way suitable for explicit computations. 2

The second step is then to extract from this geometric data the gauge bundle to which the zero-modes of wrapped M2-branes couple [22]. The zero-modes localised on intersection curves of two 7-branes on the base of the elliptic fibration are known to transform as cohomology classes of certain gauge bundles twisted with the spin bundle of the matter curve.
This follows already from the local approach to F-theory by studying the topologically half-twisted field theory on the worldvolume of the 7-branes [7,8]. The zero-modes in the topologically twisted field theory are to be identified with the zero-modes arising by quantisation of the moduli space of wrapped M2-branes in the spirit of [25]. The 1-form gauge potential to which these excitations couple is obtained by integrating C 3 over the cycle wrapped by the M2-brane in the fibre. Mathematically, this operation has a clear and well-defined meaning in the language of the intersection product within the Chow ring [22], as we review in Appendix A. Having established the gauge bundle whose cohomology groups count the massless matter states using this machinery, the third and final step consists in evaluating these cohomology groups.
In this work we make substantial progress along all three of these steps. In Section 3 we systematically explore the gauge backgrounds underlying so-called vertical gauge fluxes in F-theory. The idea is that each matter surface by itself defines a Chow class and hence a gauge background. If this gauge background is to preserve the non-Abelian gauge group in the F-theory limit, extra modification terms have to be added. The resulting Chow class defines a matter surface flux. 3 The relation between matter surfaces and gauge backgrounds has first been observed in [26] and further described in [27]. If the homology class of the matter surface is vertical, then the associated flux can alternatively be described as a linear combination of a basis of H^{2,2}_vert(Ŷ₄), as investigated systematically in [26][27][28][29][30][31][32][33][34][35][36][37][38].
Equipped with this representation of the gauge background data, we systematically develop the intersection theoretic operations [22] which allow us to extract the relevant gauge bundles on the matter curves. We observe that, as a consequence of non-trivial relations among Chow classes, the intersections of interest can be chosen to be transverse. The Chow relations in question are in fact deeply related to the absence of gauge anomalies in F-theory and are hence of interest by themselves. We will develop this interesting point further in [39]. As it turns out, the relevant gauge bundle on the matter curve can in general not be obtained as the pullback of a line bundle defined on an ambient space of the curve such as the 7-brane divisor or the base of the fibration. This means that its pushforward to the base merely defines a (proper) coherent sheaf.
The remaining task is hence to develop the suitable machinery which allows us to compute the cohomology groups of such coherent sheaves and thereby the massless matter content of an F-theory compactification. A general framework addressing precisely this point has evolved in computational algebraic geometry [40][41][42][43][44][45]. The idea is very simple: The coherent sheaf in question is defined in terms of the (Chow) class of certain points on B 3 . It is precisely these point classes which our intersection theoretic operations provide. The point classes are given very explicitly in terms of the vanishing locus of a set of functions. This defines an ideal within the coordinate ring of the space. In algebraic geometry, such ideals can be translated into sheaves, more precisely into their associated ideal sheaves. This data is represented with the help of an object known as an f.p. module, which, when the dust has settled, is nothing but a matrix encoding the relations between the functions generating the vanishing ideal. Finally, the cohomology groups we are after translate into suitable extension modules associated with this module. We describe these steps in Section 6. In order to make this article self-contained and more accessible to non-experts we are including an extended Appendix D which provides the required mathematical background and fills in the more technical details.
The computation of the extension groups can be performed algorithmically with the help of the computer programme gap [46] and is phrased in the language of categorical programming of CAP [47][48][49]. We exemplify the use of this technology by computing the massless spectrum in a family of four-dimensional F-theory compactifications with gauge group SU (5) × U (1) over base P 3 . In particular we observe an explicit dependence of the massless matter spectrum on the complex structure moduli defining the elliptic fibration.
This article is organized as follows: We begin by reviewing, in Section 2, how Chow groups encode the gauge background in F-theory [22], mathematical details being relegated to Appendix A. In Section 3 we systematically describe the vertical gauge backgrounds including in particular the matter surface fluxes. We also flesh out how to concretely apply intersection theory within the Chow ring to deduce the gauge bundles to which the massless zero-modes couple. To exemplify this general technology we classify, in Section 4, the matter surface fluxes in an F-theory fibration with gauge group SU(5)×U(1). Even though the geometry of this fibration has already been worked out in [29], we collect all the required data, in particular the explicit structure of the fibre in various codimensions, in Appendix B in order to make this article self-contained. (We always consider the flux G₄ represented by the associated Chow class in CH²(Ŷ₄), while the Chow class in general encodes much more information about the gauge background than merely the curvature.) Section 5 contains the intersection theoretic manipulations for the vertical gauge backgrounds of this model, performed in two different ways. Further computational technicalities are collected in Appendix C. The result is an explicit parametrization of the gauge bundles on the matter curves in terms of vanishing ideals. In Section 6 and Appendix D we explain how this data can be translated into a so-called f.p. graded module such that the computation of the cohomology groups can be performed with the help of computer algebra. Applying this technology to our example geometry we compute the exact charged massless matter for different choices of complex structure and observe jumps in the number of vector-like pairs as we wander in the moduli space. We conclude with a list of further directions of research in Section 7.
G 4 -Fluxes, Massless Spectra and Chow Groups
To set the stage we begin with a brief review of the essentials of F-theory compactifications to four dimensions, with special emphasis on the formulation of the gauge background following [22]. In particular we make the connection between the well established counting of charged bulk and localised zero-modes in the local topologically twisted field theory approach to F-theory and the global definition of the gauge background via Chow groups. More details can be found in Appendix A.
F-Theory Compactifications on Smooth 4-Folds
We are compactifying F-theory to four dimensions on an elliptically fibred Calabi-Yau 4-fold Y 4 with projection map π : Y 4 → B 3 . By definition, the generic fibre is a smooth elliptic curve. It degenerates and becomes singular over the discriminant locus ∆ ⊆ B 3 . A smooth resolution π : Y 4 ։ B 3 (2.1) of the singularities of Y 4 can always be obtained by replacing singular fibres by a finite number of P 1 s.
The discriminant ∆ over which the fibre degenerates decomposes into irreducible components ∆ I . Over an irreducible component ∆ I , the topology of the P 1 s in the resolved fibre represents the extended Dynkin diagram of a Lie algebra g I . 6 As a consequence, vector multiplets in the adjoint representation of g I are found to propagate on ∆ I . These gauge degrees of freedom can be described by a topologically twisted 8-dimensional N = 1 supersymmetric gauge theory [7,8], where the gauge background data is provided by vector bundles on ∆ I .
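As a toy illustration of this fibre structure: the (negative of the) intersection matrix of the fibral P 1 s over a component ∆ I is a degenerate, affine Cartan matrix. The sketch below builds the affine A 4 case relevant for g I = su(5); the matrix and node labels are our own illustrative construction, not data extracted from any specific geometry.

```python
from fractions import Fraction

def det(m):
    """Determinant via exact Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in m]
    n, d = len(m), Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if m[r][c] != 0), None)
        if p is None:
            return 0
        if p != c:
            m[c], m[p] = m[p], m[c]
            d = -d
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            m[r] = [a - f * b for a, b in zip(m[r], m[c])]
    return int(d)

# Negative of the intersection matrix of the five fibral P1s over an SU(5)
# divisor: the nodes of the extended (affine) A4 Dynkin diagram form a 5-cycle.
N = 5
cartan_affine = [[2 if i == j else (-1 if (i - j) % N in (1, N - 1) else 0)
                  for j in range(N)] for i in range(N)]

print(det(cartan_affine))              # 0: affine Cartan matrices are degenerate
finite = [row[1:] for row in cartan_affine[1:]]  # delete the affine node
print(det(finite))                     # 5: determinant of the finite A4 Cartan matrix
```

Deleting the affine node recovers the finite Cartan matrix of su(5), mirroring the statement that the resolved fibre represents the extended Dynkin diagram of g I .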
Additional localised matter arises on the so-called matter curves. Such a matter curve C IJ ⊆ B 3 is a codimension-2 subvariety along which an intersection ∆ I · ∆ J of two of the irreducible components ∆ I of the discriminant ∆ occurs. This includes possible self-intersections, i.e. I = J. Typically, the fibre structure over the matter curve C IJ represents the extended Dynkin diagram of a Lie algebra h IJ into which the Lie algebras g I and g J are embedded. 7 In the limit of vanishing fibre volume, M2-branes that wrap suitable linear combinations of P 1 s in the fibre over C IJ give rise to massless matter in some representation R of g I ⊕ g J . Therefore, to each state of the representation R one can associate a complex 2-cycle. More precisely, let us label by a = 1, . . . dim(R) the different states in representation R and by β a (R) the associated weight vector. Then to each β a (R) there exists a complex 2-cycle S a R , termed matter surface, which is given as the linear combination of the fibre P 1 s over the matter curve C IJ associated with the corresponding state. Therefore one usually denotes this matter curve as C R , and we will indeed adapt this notation in the remainder of this article. We will give more details on the localised massless states in Section 2.2.2.
In addition to the geometric information encoded in the fibration π : Y 4 ։ B 3 , defining an F-theory compactification to four dimensions requires specifying the gauge background. By duality with M-theory, the gauge background is encoded in the background data of the M-theory 3-form C 3 and its field strength G 4 . There exists a precise mathematical characterisation of this data in terms of a Deligne cohomology class of Y 4 , cf. Appendix A. 8 The Deligne cohomology group H 4 D ( Y 4 , Z(2)) [69] has been explored to describe the necessary gauge data both in M-theory and F-theory in [14-16,24] and [17-22], respectively.
The key observation is that to each 3-form gauge background we can on the one hand associate a quantised field strength G 4 , or flux, but in addition it is necessary to specify the holonomies of C 3 around non-trivial 3-cycles which correspond to flat gauge backgrounds, or Wilson 'lines'. This implies that the full gauge background data, as given by elements of H 4 D ( Y 4 , Z(2)), fits into the short exact sequence 9

0 → J 2 ( Y 4 ) → H 4 D ( Y 4 , Z(2)) → H 2,2 Z ( Y 4 ) → 0 ,

with the intermediate Jacobian J 2 ( Y 4 ) parametrising the flat gauge backgrounds. Elements of H 4 D ( Y 4 , Z(2)) can in turn be represented by elements A ∈ CH 2 ( Y 4 ), i.e. by complex 2-cycles modulo rational equivalence [22]. As we review further in Appendix A, rational equivalence for complex p-cycles is the direct generalisation of linear equivalence for divisors. This means that two p-cycles on Y 4 are rationally equivalent if their difference can be written as the zeroes or poles of a meromorphic function on a (p + 1)-dimensional irreducible subvariety of Y 4 . The group of p-cycles up to such equivalence is the Chow group CH p ( Y 4 ), while we denote the group of codimension p cycles up to rational equivalence as CH p ( Y 4 ). In particular, the divisor class group Cl( Y 4 ) hence coincides with CH 1 ( Y 4 ) on the smooth variety Y 4 .
What makes Chow groups relevant to characterise the gauge background on Y 4 is the existence of a group homomorphism called refined cycle map (see e.g. p. 123 in [70]),

γ : CH p ( Y 4 ) → H 2p D ( Y 4 , Z(p)) ,

which gives rise to a ring homomorphism (2.5) on the full Chow ring. The case p = 2 is of particular importance to us because the map allows us to associate to each class of complex 2-cycles an element in H 4 D ( Y 4 , Z(2)) defining the gauge background.
Fortunately, the concrete form of γ is not required for our computations, as we will see. This map is surjective over Q if and only if the Hodge conjecture holds. It is in general not injective. This means that two different Chow classes may in principle map to the same element in H 4 D ( Y 4 , Z(2)). This is not a big drawback for our purposes. What will be crucial is rather the fact that two algebraic cycles which are rationally equivalent are guaranteed to map to the same element in H 4 D ( Y 4 , Z (2)). This means that manipulations modulo rational equivalence do not change the 3-form background parametrised by an algebraic 2-cycle. This fact comes in particularly handy whenever the 2-cycles representing the gauge background are in turn obtained by pullback from an ambient space on which the Chow group is explicitly known. An example would be a situation where Y 4 is embedded into a smooth toric ambient space X Σ . On such smooth X Σ , rational equivalence and homological equivalence coincide. Therefore we can manipulate the underlying 2-cycle as long as we keep its homology class on X Σ fixed, and are guaranteed that the gauge data remains unchanged. This important property and a more precise summary of the formalism of [22] are reviewed further in Appendix A. Fans of commutative diagrams can find all information encrypted in Figure 4 therein.
The cohomology class associated with such a 2-cycle class in CH 2 ( Y 4 ) is identified with the 4-form flux G 4 ∈ H 2,2 ( Y 4 ) [71][72][73]. However, unlike the parametrisation via CH 2 ( Y 4 ), keeping track only of G 4 merely accounts for part of the information of the gauge background and is hence in general incomplete. Such a (class of) differential forms is then supposed to have "one leg along the fibre" [73]. Phrased explicitly, this means that the two transversality constraints

G 4 · [π −1 (D a )] · [π −1 (D b )] = 0 and G 4 · [Z] · [π −1 (D a )] = 0 for all D a , D b ∈ Cl(B 3 ) (2.7)

must be enforced. Here [Z] is the class of the zero section of the fibration π : Y 4 ։ B 3 , whose existence we assumed as stated above. 10 Furthermore, in combination with cohomology classes, the operation · denotes the intersection product in the cohomology ring, here on Y 4 . We will come back to these conditions in Section 3.1.
The class G 4 of differential forms is actually required to be an element of H 2,2 Z/2 ( Y 4 ) as a consequence of the quantisation condition [74]

G 4 + 1/2 c 2 ( Y 4 ) ∈ H 4 ( Y 4 , Z) . (2.8)

Once the gauge background is specified, the chiral index of massless matter in representation R localised on the matter curve C R can be obtained by evaluating the integral [7,24,26,28-31,75],

χ(R) = ∫ S a R G 4 . (2.9)

More precisely, (2.9) counts the chiral index of the state with weight β a (R). If the gauge flux does not break the non-Abelian gauge algebra this result is independent of the particular state β a (R), or equivalently of the matter surface S a R , a = 1, . . . , dim(R). Note that χ(R) ∈ Z as a consequence of the quantisation condition (2.8) [31,55,76]. For conceptual clarity, we would like to end this review with a word of caution. As we have stressed, we are working on a smooth resolution Y 4 of the elliptic fibration, which allows us to avoid dealing with singularities. Resolving the singularities in the fibre by blowing them up is well-known to correspond to moving to the Coulomb branch of the dual 3d M-theory vacuum. The non-Abelian gauge symmetry is restored in the dual 4d F-theory vacuum after taking the fibre volume to zero. Nonetheless, this means that by working with the resolved space Y 4 we are only able to detect Abelian gauge backgrounds. Non-Abelian gauge bundles on the 7-branes, by contrast, cannot be encoded in the Deligne cohomology or the Chow groups of the smooth space. This is only a minor drawback for the applications envisaged in this paper because non-Abelian gauge backgrounds would break the gauge algebra on the 7-branes, while Abelian gauge backgrounds are already sufficient to generate a chiral spectrum. Nonetheless it would be interesting to explore such more general backgrounds directly in M-theory, e.g. by studying the Deligne cohomology of the singular 4-fold Y 4 (see e.g. [21]) or by further developing alternative techniques such as [77,78].
Cohomologies in Local Setups via Topological Twist
So far we have reviewed the definition of an F-theory vacuum in terms of a globally consistent elliptic fibration with 3-form background. Locally, the gauge theory on the 7-branes enjoys a description as a partially topologically twisted 8d Super-Yang-Mills theory [7,8]. In this language, the definition both of the gauge background and the computation of the exact massless matter content is well established. As in [22], our strategy is to extract the local gauge data from the globally defined geometry and 3-form background, which then allows us to determine the massless matter content. To prepare ourselves for this task, we now briefly collect the well-known description of massless matter in the local approach via topologically twisted gauge theory.
Bulk Matter
Let us begin by recalling the nature of the so-called bulk 7-brane matter on a non-Abelian brane stack. A stack of 7-branes wrapping the component ∆ I of the discriminant locus carries, in addition to a gauge multiplet, massless 4d N = 1 chiral multiplets in the adjoint representation of the non-Abelian gauge algebra g I underlying the gauge group G I . As stressed at the end of the previous section, we restrict ourselves to Abelian gauge backgrounds. Locally, these are described by a line bundle L I in the Picard group Pic(∆ I ) on the 7-brane stack on ∆ I . Embedding the structure group U (1) I of L I into G I breaks G I to a subgroup H I × U (1) I , and the adjoint representation of G I decomposes into irreducible representations r m I of H I . Then the massless bulk matter in representation r m I on ∆ I is counted by the cohomology groups [7,8,79,80] of the form

H 0 (∆ I , L m I ) (2.12)

Here the relevant line bundle L m I is the q I (r m I )-th power of L I , with q I (r m I ) the U (1) I charge of representation r m I . The second and third line of (2.12) count chiral and, respectively, anti-chiral N = 1 multiplets in representation r m I . The first and fourth line vanish for supersymmetric fluxes [80] and can hence be discarded. The chiral index resulting from this spectrum is (2.14)
Localised Matter and The Definition of The Spin Bundle
Consider next the massless matter localised on a curve C R , and denote by L R ∈ Pic(C R ) the local gauge background to which such matter in the twisted theory on R 1,3 × C R is coupled. The number of massless matter multiplets in representation R is given by the dimensions of the cohomology groups [7,8]

H i (C R , L R ⊗ √K C R ) (2.15)

with i = 0 and i = 1 counting chiral and anti-chiral N = 1 multiplets, respectively. It has already been asserted in [22] that the correct choice of spin bundle √K C R appearing in (2.15) is the one induced by the holomorphic embedding of C R into the base B 3 , in the following sense: Let D a , D b ∈ Cl(B 3 ) and C = D a ∩ D b ⊆ B 3 a curve, then the adjunction formula for C and D a gives

K C = (K B 3 ⊗ O(D a ) ⊗ O(D b ))| C =: M | C .

The line bundle M ∈ Pic(B 3 ) is uniquely determined by its first Chern class since b 1 (B 3 ) = 0. The latter is necessary for an elliptically fibred Calabi-Yau 4-fold Y 4 ։ B 3 to exist, for which b 1 (Y 4 ) = 0. If both D a and D b are spin 11 , c 1 (M ) is an even class in H 2 (B 3 , Z) and √ M is the unique line bundle on B 3 with first Chern class 1 2 c 1 (M ) ∈ H 2 (B 3 , Z). The spin structure to take in (2.15) is then √K C R = √M | C R . If D a or D b are not spin, then the Freed-Witten quantisation condition ensures that L R ⊗ √K C R can be split up as a product of two integral bundles L 1 ⊗ L 2 such that L 2 is a product of line bundles obtained as the pullback of well-defined bundles on B 3 . The pullback bundles underlying L 2 include the contribution from the spin structure.
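To make the adjunction argument concrete, the following sketch checks it numerically in a simplified setting where the base is replaced by P 3 , an assumption made purely for illustration (so that the canonical bundle of the ambient space is O(−4) and all degrees are integers): the degree of M | C equals 2g − 2 for the complete intersection curve C = D a ∩ D b , and c 1 (M ) is even precisely when the parity condition on D a and D b holds.

```python
# Toy check of the adjunction computation for C = D_a ∩ D_b, with the base
# replaced by P^3 for concreteness (so K_base = O(-4)); this is an assumption
# for illustration, not the F-theory base B_3 itself.
def deg_M_on_C(da, db):
    # M = K_base + D_a + D_b restricted to C has degree da*db*(da + db - 4) on P^3
    return da * db * (da + db - 4)

def genus_complete_intersection(da, db):
    # classical genus formula for a smooth complete intersection curve in P^3
    return 1 + da * db * (da + db - 4) // 2

for da, db in [(2, 2), (2, 3), (3, 3), (2, 4)]:
    g = genus_complete_intersection(da, db)
    assert deg_M_on_C(da, db) == 2 * g - 2  # adjunction: M|_C is the canonical bundle K_C
    # a spin structure sqrt(M) pulled back from the ambient space needs c1(M) even:
    print(da, db, "c1(M) even:", (da + db - 4) % 2 == 0)
```

A degree-d surface in P 3 is spin precisely when d is even; the printout shows that c 1 (M ) is then even whenever both surfaces are spin, in line with the condition on D a and D b above.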
From C 3 -Backgrounds to Line Bundles
Our next task is now to extract the line bundles appearing in the cohomologies (2.12) and (2.15) from the gauge background on the globally defined 4-fold Y 4 . The general idea is that the matter states are associated with the quantised moduli space of M2-branes [25] wrapping suitable components of the fibre, as reviewed in Section 2.1. An M2-brane couples in a standard way to the 3-form background via the Chern-Simons action S CS = 2π ∫ M2 C 3 . We therefore need to integrate the 3-form gauge background over the fibral curve wrapped by the M2-brane associated with a given state. The resulting object then describes the Abelian gauge background to which the M2-brane excitations along the base couple.
The formalism of Chow groups allows us to perform this operation of integration over the fibre in a manner which is guaranteed to keep the full amount of information about the gauge background [22]. Indeed, by using intersection theory within the Chow ring we are able to extract a line bundle either on the 7-brane ∆ I or on the matter curve C R on the base. The compatibility of the refined cycle map (2.6) with intersection within the Chow ring and with pushforward under the fibration π ensures that this procedure correctly recovers the information both about the first Chern class and about the flat holonomies of the line bundle on the base.
Localised Matter
For localised matter this formalism has already been carried out in [22]. Consider a matter surface S a R , given by a fibration of rational curves over the matter curve C R on the base B 3 . Furthermore fix a 2-cycle class A ∈ CH 2 ( Y 4 ) to represent the 3-form background as reviewed in Section 2.1. We can then form the intersection product S a R · ι R,a A, loosely speaking by pulling back A to S a R . The notation and details of this construction are explained in Appendix A.2. Since the pullback is compatible with rational equivalence and preserves codimension, we can view this as an object in CH 2 (S a R ), i.e. an element of the class of points on S a R . The operation of integration along the fibre motivated above corresponds to projection onto the base via the pushforward under π R,a , where π R,a denotes the projection from the surface S a R to C R . We have therefore obtained a Chow class of points on C R . Since CH 1 (C R ) ≅ Pic(C R ), this class defines a line bundle on C R . By construction, this is the line bundle induced by the gauge background to which the quantum excitations of the M2-brane wrapping the fibre of S a R couple. If the gauge background preserves the non-Abelian gauge symmetry in the F-theory limit, then for fixed R this construction gives the same line bundle for each choice of S a R , a = 1, . . . , dim(R), and we can omit the index a. By comparison with (2.15), the massless chiral matter in representation R is counted by the cohomology groups of this line bundle twisted by the spin bundle, (2.21). A formalisation of these steps based on an accurate application of the intersection product within the Chow ring has been given in [22] and is reviewed in Appendix A.2.

11 This means there exists a Z-Cartier divisor k a such that 2k a is the canonical divisor on D a .
Note that (2.21) counts the exact massless matter spectrum modulo potential spacetime instanton corrections in F-theory: For instance, if the matter is charged only under an Abelian gauge group which acquires a Stückelberg mass, D3/M5-instantons can still generate nonperturbative mass terms which are exponentially suppressed by the Kähler moduli in the sense of [81][82][83][84][85]. These corrections are not accounted for by the topological field theory derivation of (2.21).
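Riemann-Roch makes explicit why the chiral index (2.9) only sees the degree of the induced line bundle: for the twisted bundle appearing in the cohomology count, deg √K C = g − 1 cancels the genus dependence. A minimal sketch (the genus and degree values below are arbitrary test inputs, not data of a specific geometry):

```python
# Sketch: on a matter curve C_R of genus g, Riemann-Roch gives
# h^0 - h^1 = deg(L ⊗ sqrt(K_C)) + 1 - g for the bundle counting zero modes.
# Since deg sqrt(K_C) = g - 1, the chiral index reduces to deg(L).
def chiral_index(deg_L, genus):
    deg_twisted = deg_L + (genus - 1)       # deg(L ⊗ sqrt(K_C))
    return deg_twisted + 1 - genus           # Riemann-Roch: h^0 - h^1

for g in range(0, 5):
    for dL in range(-3, 4):
        assert chiral_index(dL, g) == dL     # the index is independent of the genus

print(chiral_index(3, 2))  # 3: three net chiral multiplets
```

The exact values of h 0 and h 1 separately, by contrast, do depend on the moduli of the bundle, which is why the explicit cohomology computations of Section 6 are needed to count vector-like pairs.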
Bulk Matter
In a similar manner, we can now define the line bundles on a stack of 7-branes wrapping ∆ I to which the bulk matter couples. The fibre of Y 4 over a generic point on ∆ I takes the form of a connected sum of rational curves P 1 i I intersecting like the nodes of the affine Dynkin diagram of g I . Fibring P 1 i I over ∆ I defines the so-called resolution divisor E i I . Let us now consider the Chow class E i I ∈ CH 1 ( Y 4 ) associated with E i I , and fix a gauge background A ∈ CH 2 ( Y 4 ). Recall that pullback is well-defined for CH • ( Y 4 ). Therefore we can pull A back to E i I by considering the intersection product E i I · A. Integration over the fibre amounts to projection onto the base of E i I via the pushforward of π i I : E i I ։ ∆ I . This results in the class π i I * (E i I · A) ∈ CH 1 (∆ I ) (2.22) on the complex 2-cycle ∆ I , and since CH 1 (∆ I ) ≅ Pic(∆ I ) (2.23) we identify the associated object as the line bundle on ∆ I induced by the gauge background with structure group U (1) i I . This process is to be repeated for all E i I , i I = 1, . . . , rk(g I ).
If the structure group U (1) I of the line bundle L I appearing in Section 2.2.1 is given by the linear combination U (1) I = Σ i I a i I U (1) i I , then L I is the corresponding combination of the bundles L i I , (2.25).
Systematics of Gauge Backgrounds
We now systematically describe the representation of a gauge background by an element A ∈ CH 2 ( Y 4 ). As reviewed, the cohomology class [A] is identified with the flux G 4 ∈ H 2,2 ( Y 4 ). The cohomology group H 2,2 ( Y 4 ) enjoys a decomposition into three orthogonal subspaces,

H 2,2 ( Y 4 , R) = H 2,2 vert ( Y 4 , R) ⊕ H 2,2 hor ( Y 4 , R) ⊕ H 2,2 rem ( Y 4 , R) ,

where H 2,2 vert ( Y 4 , R) is spanned by products of (1,1)-forms and H 2,2 hor ( Y 4 , R) is the subspace of H 2,2 ( Y 4 ) obtained by variation of Hodge structure starting from the unique (4, 0) form [86]. Finally, H 2,2 rem ( Y 4 , R) denotes the remainder piece which is neither vertical nor horizontal [34].
We first revisit, in Section 3.1, the constraints on a consistent gauge background posed by gauge invariance and transversality, directly at the level of algebraic cycles rather than of cohomology. To systematise the geometric description of vertical gauge backgrounds we then introduce the notion of a matter surface flux in Section 3.2. In Section 3.3 we detail the intersection theoretic operations which extract from such backgrounds the sheaves whose cohomology groups count the massless matter.
Gauge Invariance and Transversality
The condition for a flux G 4 ∈ H 2,2 vert ( Y 4 ) not to break non-Abelian gauge symmetries in F-theory is typically formulated in the literature as

G 4 · [E i I ] · [π −1 (D a )] = 0 for all D a ∈ Cl(B 3 ) . (3.2)

However, this condition is not sufficient to ensure gauge invariance of a flux in H 2,2 rem ( Y 4 ): Due to the orthogonality of H 2,2 rem ( Y 4 ) and H 2,2 vert ( Y 4 ), any flux in H 2,2 rem ( Y 4 ) satisfies (3.2) even though it might break the non-Abelian gauge group on a 7-brane, a prime example being the so-called hypercharge flux in F-theory models based on gauge group SU (5). Furthermore, we seek to find a condition not only for the gauge flux, but more generally for the Chow class A with [A] = G 4 representing the full gauge background.
From the perspective of the topologically twisted theory on the 7-brane ∆ I , the condition for gauge invariance is that the gauge bundle embedded into the structure group of the non-Abelian group should be the trivial bundle on ∆ I . Hence given an element A ∈ CH 2 ( Y 4 ) with [A] = G 4 the correct condition to impose is

π i I * (E i I · A) = 0 ∈ CH 1 (∆ I ) for all i I . (3.3)

The condition for a 4-form flux G 4 ∈ H 2,2 ( Y 4 ) to descend to a well-defined gauge flux in F-theory is typically formulated as the transversality conditions

G 4 · [π −1 (D a )] · [π −1 (D b )] = 0 (3.4)

and

G 4 · [Z] · [π −1 (D a )] = 0 . (3.5)

One possible derivation of these constraints is via the observation that they are equivalent to the absence of certain Chern-Simons-terms in the dual 3d M-theory vacuum of the form A α ∧ F β and A 0 ∧ F α . Here A α and its field strength refer to the h 1,1 (B 3 ) vector multiplets associated with the Kähler moduli of the base B 3 in the 3d N = 2 theory, and A 0 , F 0 refer to the vector multiplets associated with the Kaluza-Klein U (1) associated with circle reduction of the 4d F-theory to the 3d M-theory [30]. Now, since these Chern-Simons terms are not generated at one loop in the transition from the 4d F-theory to the 3d M-theory vacuum, and they do not either descend from classical terms in 4d upon circle reduction, they must be absent in the M-theory effective action in order for the 3d vacuum to lift to a Poincaré invariant F-theory vacuum [30].
While conceptually very clear, this derivation is only sensitive to the intersection product in cohomology. In particular, these conditions are again trivially satisfied by a gauge flux G 4 not in H 2,2 vert ( Y 4 ). An alternative derivation of condition (3.4) is to require that G 4 does not affect the chirality of states wrapping the full fibre, as these correspond to the higher KK modes in M-theory. Their chirality must therefore equal that of the zero-mode in order for the 4d F-theory vacuum to be Lorentz invariant. In particular, consider a matter surface S a (R) over a curve C R in the base. The surface S a n (R) is defined by adding to the fibral curves over C R n multiples of the full fibre F . M2-branes wrapping the fibre of S a n (R) correspond to the n-th KK state associated with the 4d F-theory multiplet with weight β a (R). The condition (3.4) guarantees that

∫ S a n (R) G 4 = ∫ S a (R) G 4 .

Hence the 3d chirality of the spectrum of KK modes with weight β a (R) equals that of the KK zero-mode in the 3d N = 2 M-theory vacuum. 12 However, once we specify the gauge background beyond its flux, we require not only that the 3d net chirality of KK modes should agree, but rather that the exact number of 3d KK states in representation R and R must match that of the zero modes. 13 Given the construction of S a n (R) described above, a sufficient condition generalizing (3.4) is to require the corresponding intersection products to vanish already at the level of the Chow group, (3.8). From a conceptual point of view this is therefore proposed as the condition replacing (3.4) at the level of Chow groups. It would be interesting to investigate if there exist gauge backgrounds whose associated flux satisfies (3.4) even though its underlying Chow class violates (3.8).
Less clear is the correct interpretation of (3.5) at the level of Chow groups. In view of (3.3) and (3.8), a natural generalisation would be to require that the analogous intersection products with the zero section vanish within the Chow group, (3.9). Again, we leave it for future investigations to determine if there are any non-trivial examples of gauge backgrounds which distinguish between (3.9) and (3.5). For the cycles considered in this article, both conditions lead to equivalent constraints.
Systematics of Vertical Gauge Backgrounds
We will focus in this article on gauge backgrounds whose associated G 4 flux lies in H 2,2 vert ( Y 4 , R), subject to the transversality conditions (2.7). 14 These can be classified, on a fibration over a generic base space B 3 , as follows:

Matter Surface Flux
Consider a matter surface and its associated element S a R ∈ CH 2 ( Y 4 ). By construction G 4 = [S a R ] satisfies (3.8) and (3.9). If we are interested in describing a gauge background which does not break the non-Abelian gauge algebra in the F-theory limit, we must modify the Chow class S a R by adding suitable correction terms,

A(R) = S a R + ∆ a (R) , ∆ a (R) = Σ i I , j J β a (R) i I C −1 i I j J E j J | C R . (3.10)

Here E i I denotes the resolution divisors for the gauge algebra g I , and E j J | C R their restriction to the curve C R . Furthermore, C −1 i I j J = δ IJ C −1 i I j I with C −1 i I j I the inverse of the Cartan matrix of g I . The expression (3.10) is indeed gauge invariant because the intersection of any E i I with S a (R) in the fibre reproduces the entry β a (R) i I in the weight vector, and likewise intersection of E i I with the component E j J in the fibre reproduces the negative of the corresponding Cartan matrix entry.
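The gauge-invariance mechanism of (3.10) can be checked mechanically: with fibral intersections E i · S a = β i and E i · E j | C = −C ij , the correction coefficients obtained from the inverse Cartan matrix cancel all Cartan intersections. A sketch for g I = su(5), with a hypothetical weight vector chosen purely for illustration:

```python
from fractions import Fraction

# Cartan matrix of su(5) (type A4)
C = [[2, -1, 0, 0], [-1, 2, -1, 0], [0, -1, 2, -1], [0, 0, -1, 2]]

def solve(A, b):
    """Solve A x = b exactly over the rationals (Gauss-Jordan elimination)."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

# Hypothetical Dynkin labels beta of some state (illustrative, not taken from
# the actual SU(5) matter surfaces); fibral intersections as in the text:
# E_i . S^a = beta_i and E_i . (E_j restricted to C_R) = -C_ij.
beta = [1, 0, 0, -1]
coeff = solve(C, beta)  # correction coefficients beta_k (C^-1)_kj of (3.10)

# E_i . (S^a + sum_j coeff_j E_j|_C) = beta_i - sum_j C_ij coeff_j
residual = [beta[i] - sum(C[i][j] * coeff[j] for j in range(4)) for i in range(4)]
print([int(r) for r in residual])  # [0, 0, 0, 0]: the corrected cycle is gauge invariant
```

The vanishing residual is exactly the statement that all intersections of the corrected cycle with the resolution divisors E i vanish in the fibre.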
One can convince oneself that the final result after adding the correction terms ∆ a (R) to S a R is independent of the index a = 1, . . . , dim(R), which is why A(R) carries no such index. We term gauge backgrounds of this form matter surface fluxes. The first example of such a flux has been given, at the level of cohomology, in [26], and this approach to fluxes has been developed systematically in [27].
U(1) Flux
A second type of vertical gauge background arises in the presence of extra independent rational sections S X . Via the Shioda map, each independent such section is associated with a divisor class U X ∈ CH 1 ( Y 4 ) such that C 3 = A X ∧ [U X ] + . . . defines a U (1) X gauge potential A X . Given any divisor class F ∈ CH 1 (B 3 ), the object

A X (F ) = F · U X (3.11)

defines a gauge background which is automatically vertical and respects the non-Abelian gauge algebras. At the level of fluxes these backgrounds have been introduced in [29,91] (see also [28]).
Cartan Flux
Third, we can consider the gauge background for the Cartan U (1) i I by restricting the resolution divisor E i I to any curve C ⊆ ∆ I . This defines the element

A i I (C) = E i I | C ∈ CH 2 ( Y 4 ) . (3.12)

While automatically vertical, this gauge background clearly breaks the gauge algebra g I . More generally, the curve class [C] ∈ H 2 (∆ I ) might contain components trivial in H 2 (B 3 ), but this still defines a valid flux [90,92]. Let us stress again that these three types of vertical fluxes and their underlying elements in CH 2 ( Y 4 ) are the ones which exist for a generic choice of base B 3 . However, they are not all independent due to a number of non-trivial relations within the Chow group, which descend to corresponding relations in H 2,2 ( Y 4 ) via the cycle map. In particular, in [39] we prove that anomaly cancellation implies that the matter surface fluxes satisfy a set of linear relations in H 2,2 ( Y 4 ), thereby generalising previous observations in [38]. We furthermore exemplify that this relation holds at the level of the Chow group, and conjecture this to hold true more generally.
An equivalent approach to classifying gauge backgrounds with G 4 ∈ H 2,2 vert ( Y 4 ) is by systematically forming all intersections of two elements in CH 1 ( Y 4 ) and determining the linearly independent combinations whose cohomology classes satisfy verticality and gauge invariance. This approach requires finding a basis of H 2,2 vert ( Y 4 ) by analysing the relations in the intersection ring. The first systematic such classification for F-theory fibrations over general base has been carried out in [31]. This approach is widely used in the literature, including [29,30,32,33,35,36,38].
Intersection Theory for Vertical Gauge Backgrounds
We now describe how to perform the programme outlined in Section 2.3 of extracting the line bundle from the various types of gauge backgrounds. The advantage of representing the gauge background by the Chow group class of an algebraic cycle is that transverse intersection products can be evaluated very intuitively. Such transverse intersections factorise into a piece in the fibre and a piece in the base, and the projection onto the base is easily evaluated. Non-transverse intersections, on the other hand, can be expressed as sums or differences of transverse intersections by making use of the relations within the Chow group between the Chow classes representing the gauge background. It is here where the formalism of Section 2.1 becomes most crucial: The fact that the refined cycle map (2.6) is defined at the level of Chow groups guarantees that using such relations within CH 2 ( Y 4 ) does not alter the gauge background and is hence permissible in evaluating the intersections.
For the matter surface fluxes of the form (3.10), we therefore proceed as follows: 1. Matter surface flux: Consider a matter curve C R ⊆ B 3 for some representation R, a matter surface S a ( R) and the associated matter surface flux A( R) ∈ CH 2 ( Y 4 ). It is given by a formal linear sum of P 1 -fibrations over C R , depicted in green colour in Figure 1.
2. State:
Next we consider a state over the matter curve C R . Such a state is encoded by a matter surface S a R , whose fibre structure in Figure 1 is indicated in red. 3. 'Intersection' of S a R and A( R): We assume that the curves C R and C R intersect transversely in B 3 . We discuss the case of non-transverse intersections below. For simplicity we assume in Figure 1 that the two curves intersect in one point I only, with multiplicity m(C R ∩ C R , I). 16 From knowledge of the splittings in the fibre we can compute the intersection number n f in the fibre over I, (3.14). We can therefore consider the divisor m(C R ∩ C R , I) n f [I] on C R , (3.15), and its associated line bundle. Then the sheaf cohomologies of this line bundle count the massless excitations of the state S a R in the presence of A( R).
4. 'Non-transverse intersections':
If the curves C R and C R do not intersect transversely, then we can use linear relations among the Chow classes representing the various gauge backgrounds to exchange the specific A( R) under consideration by a linear combination of other A( R), for which the relevant intersections are indeed transverse. We will show this in more detail in an example in Section 5.1. For a U (1) X gauge background described by A X (F ) = F · U X , a very similar logic has already been applied in [22] to deduce the relevant line bundles on the base. Consider a state in representation R with U (1) X charge q X (R). This state is associated with a matter surface S R over a curve C R on the base B 3 . 17 We are also introducing the notation ι C R : C R ֒→ B 3 for the embedding of the curve C R into B 3 . Then the line bundle on C R to which the zero modes couple is given by

L(S R , A X (F )) = O C R ( q X (R) · (ι C R * F ) ) , (3.19)

and the number of zero modes is counted by the dimensions of H i (C R , L(S R , A X (F )) ⊗ √K C R ). This result follows from the fact that, as in the construction above, the intersection between the gauge background A X (F ) and the matter surface S R factors into an intersection in the fibre and in the base. Projecting onto C R gives a multiplicity, which by construction of U X is exactly the U (1) X charge q X (R). The term in brackets in (3.19) describes the intersection on the base.
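For U(1) X backgrounds this reduces chiral zero-mode counting to simple arithmetic: the induced bundle has degree q X (R) times the base intersection number, and Riemann-Roch then gives h 0 − h 1 directly (the individual h i still require the sheaf cohomology techniques of Section 6). The numbers below are illustrative assumptions, not data of the example geometry:

```python
# Sketch: chiral zero-mode counting for a U(1)_X background A_X(F) = F . U_X.
def deg_induced_bundle(q_X, F_dot_C):
    """Degree of L(S_R, A_X(F)) on C_R: charge times base intersection number."""
    return q_X * F_dot_C

def chiral_index(deg_L):
    """h^0 - h^1 of L ⊗ sqrt(K_C) via Riemann-Roch; the genus of C_R drops out."""
    return deg_L

# hypothetical charge q_X(R) = 3 and base intersection number F . C_R = 2
dL = deg_induced_bundle(3, 2)
print(dL, chiral_index(dL))  # 6 6
```

The design point is that the charge enters only as an overall multiplicity of the base intersection, exactly as in the factorisation of the intersection product described above.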
As a somewhat special case this reasoning can be applied to the Cartan gauge backgrounds (3.12), where we now invoke the embedding of the matter curve ι C R : C R ֒→ ∆ I directly into the divisor ∆ I wrapped by the 7-brane stack to obtain the analogous line bundle on C R . Here the weight β a (R) i I is of course by definition the charge of the state under the Cartan U (1) i I .
We have focused here on the geometric interpretation of the Chow group elements representing the gauge background, for instance as a matter surface flux. Equivalently, one can evaluate the intersection theoretic pairing if one explicitly presents the vertical gauge data via the intersection of two elements in CH 1 ( Y 4 ), as remarked at the end of the previous section. The computations within the Chow ring on Y 4 are significantly simplified if it is possible to express these intersections as the pullback of intersections of elements of CH 1 ( X Σ ) of a toric ambient space X Σ of Y 4 . As stressed after (2.6) and further in Appendix A, in this case we can use homological relations on X Σ to evaluate the intersection product. We will encounter this explicitly in the concrete examples studied in the remainder of this article.
F-Theory GUT-Models with SU(5) × U(1) X -Symmetry
Our next goal is to demonstrate our formalism of computing the line bundles and their sheaf cohomology groups in an explicit example. We shall design the example to be as simple as possible while at the same time exhibiting all ingredients of our general discussion so far. In order to exemplify the role of Abelian fluxes of type (3.11), we need at least one extra rational section. The arguably simplest type of such fibrations is obtained by a U (1) restricted Tate model [91]. To study the behaviour of massless matter charged also under a non-Abelian gauge algebra, we model an extra non-Abelian gauge factor, the simplest class being represented by Tate models with an I n singularity. The existence of additional non-trivial vertical gauge fluxes in such models requires the non-Abelian gauge group to be at least SU (5) [31]. In this sense a U (1) restricted Tate model with gauge group G = SU (5) × U (1) X is indeed minimal. The top describing the resolved fibre of this model was originally introduced in [29]. We now provide a brief review of this model; the interested reader is referred to the above references for further details.
SU(5) × U(1) X Fibration
Our starting point is a Tate model over a smooth base B 3 , described as the vanishing locus V (P T ) of the hypersurface equation

P T = y 2 + a 1 x y z + a 3 y z 3 − x 3 − a 2 x 2 z 2 − a 4 x z 4 − a 6 z 6

in an ambient space X 5 . This ambient space X 5 is given by a fibration of P 2,3,1 over B 3 . The homogeneous coordinates on each P 2,3,1 fibre are [x : y : z] and the a i are sections of the i-th power of the anti-canonical bundle of the base B 3 . We are considering fibrations over base spaces B 3 compatible with Y 4 being Calabi-Yau. In particular this implies H 1 (B 3 , Q) = 0.
In addition, we restrict ourselves in the sequel to B_3 without torsional 1-cycles, i.e. H_1(B_3, Z) = 0. By this assumption, the divisor class group of B_3 coincides with H^{1,1}(B_3).
An SU(5) × U(1)_X gauge symmetry is engineered by specialising the sections a_i further to a_1 = a_{1,0}, a_2 = a_{2,1} w, a_3 = a_{3,2} w^2, a_4 = a_{4,3} w^3, a_6 = 0. The SU(5) gauge group is then localised on the discriminant component W = V(w), and setting a_6 ≡ 0 implements the extra section associated with U(1)_X. To resolve the singularities one introduces blow-up coordinates e_i, i = 1, ..., 4, and s. The coordinate s can be associated with the second section of this fibration [29]. The resolved fibration π: Y_4 ։ B_3 can be described as the hypersurface V(P'_T) ⊆ X_5, where X_5 is a new ambient space and P'_T the so-called proper transform of the Tate polynomial, in which π*w = e_0 e_1 e_2 e_3 e_4 due to the blow-ups. The resolved fibre ambient space is itself toric, with toric coordinates (e_0, e_1, e_2, e_3, e_4, x, y, z, s) and weights as given in Table 4, which corresponds to resolution T_11 in [29].
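A quick consistency check of this specialisation, using the standard Tate-to-discriminant formulas (a sketch with our own variable names; note that the text's convention for Δ differs from the bare b_i-formula below by an overall factor of 16): plugging in the vanishing orders above, the discriminant starts at order w^5, as required for an I_5 fibre over W = V(w).

```python
import sympy as sp

w, a10, a21, a32, a43 = sp.symbols('w a10 a21 a32 a43')

# Tate coefficients specialised to the SU(5) x U(1)_X vanishing orders
a1, a2, a3, a4, a6 = a10, a21 * w, a32 * w**2, a43 * w**3, 0

# Standard Tate quantities and discriminant
b2 = a1**2 + 4 * a2
b4 = a1 * a3 + 2 * a4
b6 = a3**2 + 4 * a6
b8 = (b2 * b6 - b4**2) / 4
Delta = sp.expand(-b2**2 * b8 - 8 * b4**3 - 27 * b6**2 + 9 * b2 * b4 * b6)

# Delta vanishes to order 5 in w, with the expected leading coefficient
assert all(Delta.coeff(w, k) == 0 for k in range(5))
lead = Delta.coeff(w, 5)
assert sp.expand(lead - a10**4 * a32 * (a10 * a43 - a21 * a32)) == 0
```

The leading coefficient a10^4 a32 (a10 a43 - a21 a32) reproduces (up to the normalisation factor 16) the w^5 term of the discriminant expansion quoted later in Section 6.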
We will denote by E_i ∈ CH^1(Y_4) the class of algebraic cycles rationally equivalent to the vanishing locus V(e_i) ⊆ X_5.^18 These classes correspond to the generators of the Cartan U(1) symmetries of SU(5). We use S and Z to denote the classes of the extra rational section V(s) and the zero section V(z) in CH^1(Y_4), respectively. These allow us to express the generator U_X of the U(1)_X gauge symmetry. Matter in the representations 10_1, 5_3, 5_{-2} and 1_5 localises on the curves in B_3 listed in (4.7).

Footnote 18: We apply similar notations for the vanishing loci associated to the other homogeneous coordinates of the top.
The subscripts denote the charges of the respective SU (5) representations under the Abelian gauge group factor U (1) X .
The matter surfaces S_a(R) ∈ CH^2(Y_4) over these matter curves can be obtained by analysing the fibre structure of π: Y_4 ։ B_3 [29,31]. We briefly review this subject in Appendix B. In particular, the U(1)_X-charge of the state associated with a matter surface S_a(R) is encoded in the intersection of S_a(R) with U_X in the fibre.
For explicit computations it will be crucial to make use of the so-called linear relations of the SU(5) × U(1)_X top. There are three generators for these relations. Here X ∈ CH^1(X_5) denotes the class of algebraic cycles rationally equivalent to V(x) ⊆ X_5, and similarly for the other divisors.
(Self-)Intersections of Matter Curves
To evaluate expressions such as (3.15) we need the (self-)intersections of the matter curves C_{10_1}, C_{5_3}, C_{5_{-2}} and C_{1_5}. In anticipation of this application, let us derive the relevant results in this section.
Two curves C_i, C_j ⊆ W can be regarded as divisors on W, i.e. elements of CH^1(W), and hence their intersection product is a special case of the general intersection product within the Chow ring reviewed in Appendix A.2. Since it is clear that we will be using the canonical embedding ι_{C_i}: C_i ֒→ W, we will abbreviate C_i ·_{ι_{C_i}} C_j as simply C_i · C_j and interpret this as the pullback of C_j to C_i, i.e. as an element in CH_0(C_i) ≃ CH^1(C_i) of the form Σ_{k=1}^N m_k p_k. Here N is a non-negative integer, m_k ∈ Z and the p_k ∈ C_j are points. To this element we associate the line bundle O_W(C_j)|_{C_i}, and the intersection number |C_i · C_j| is simply [93,94] |C_i · C_j| = Σ_{k=1}^N m_k. In this sense, the intersection D = C_i · C_j occurs over the points p_k with multiplicities m_k. Note however that this notion of 'intersection points' is only well-defined up to linear equivalence of divisors: pick a divisor D' ≠ D with D' ∼ D; then D' will denote different intersection points with different multiplicities.
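For intuition on |C_i · C_j| = Σ_k m_k, here is a minimal affine sketch (our own toy example, not the matter curves of this model): a plane conic meets a line in deg · deg = 2 points counted with multiplicity, and the multiplicities m_k are visible as root multiplicities of a resultant.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = y - x**2        # affine conic C_i (degree 2)
g = y               # line C_j (degree 1), tangent to the conic at the origin

# Eliminating y, the roots of the resultant are the x-coordinates of the
# intersection points, and their root multiplicities are the m_k.
r = sp.resultant(f, g, y)
mults = sp.roots(sp.Poly(r, x))     # a single point x = 0 with multiplicity 2
assert sum(mults.values()) == 2     # = deg(C_i) * deg(C_j), as Bezout predicts
```

The tangency produces one point with m = 2 rather than two simple points; moving the line to a linearly equivalent one in general position splits this into two points of multiplicity 1, illustrating the 'well-defined only up to linear equivalence' caveat above.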
Bearing this in mind, let us work out intersection points and their associated multiplicities for the intersections of the matter curves C_{10_1}, C_{5_3} and C_{5_{-2}}. We first define the intersection loci Y_i.

Figure 2: Schematic picture of the intersection properties of the matter curves C_{10_1}, C_{5_3}, C_{5_{-2}} and C_{1_5} in the base space B_3. The tiny coloured numbers attached to the Y_i indicate the self-intersection numbers of the matter curves in W. The outer box represents the base space B_3.
Then in view of the explicit realisation (4.7) of the curves we find the transverse intersections listed in (4.12).
This structure is depicted schematically in Figure 2. Note that the loci Y_i are in general reducible; the numbers of points they consist of are recorded in (4.13). For the self-intersections we proceed very similarly. Let us start with C_{10_1} and note (4.14). Consequently, up to linear equivalence, C_{10_1} self-intersects at Y_1 and Y_2, with multiplicity +2 at Y_1 and with the multiplicity at Y_2 as recorded in (4.16) and in Table 4.2, which lists the intersection points and intersection multiplicities (up to linear equivalence) of the matter curves. The above manipulations manifestly involve rational coefficients. This is consistent with our assumption that B_3 does not contain torsional divisors, as explained at the beginning of Section 4.1.
Irrespective of the appearance of rational coefficients, note that by construction the line bundles here are integer quantised. We summarise our findings in Table 4.2.
Vertical and Gauge Invariant Matter Surface Fluxes over Matter Curves
We now turn to constructing the matter surface fluxes (3.10) over the matter curves C_{10_1}, C_{5_3}, C_{5_{-2}} and C_{1_5}, beginning with the flux associated with one of the matter surfaces S_a(10_1). The 2-cycle A(10_1), obtained by subtracting suitable correction terms, will be a linear combination of rational curves fibred over C_{10_1}. These rational curves arise from the splitting of the fibres of the resolution divisors E_i ⊆ Y_4, i = 1, ..., 4, over C_{10_1}. For example, the fibre of E_2 splits into two rational curves P^1_{24} and P^1_{2B}. The 2-cycles obtained by fibring these over C_{10_1} will be denoted by P^1_{24}(10_1) and P^1_{2B}(10_1), respectively. An overview of such fibral curves and their associated 2-cycles is given in Appendix B. More details can be found in [29,31].
It is not too hard to repeat this analysis for matter surface fluxes over C_{5_3} and C_{5_{-2}}. We list the relevant P^1-fibrations and the corresponding short-hand notations in Appendix B. This enables us to state the results (4.21), using analogous notation for the respective fibral curves in the order listed in (B.17) and (B.23). By similar arguments it can be shown that the resulting flux is both vertical and gauge invariant.^19,20

Footnote 19: The fractions 1/5 originate from the inverse of the Cartan matrix C.
Footnote 20: The fluxes described above are the ones which exist generically for every choice of B_3. In addition, there can be extra gauge invariant matter surface fluxes if B_3 has special properties, for instance if the matter curves are forced to split.
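The origin of the fractions 1/5 mentioned in footnote 19 can be checked directly: the Cartan matrix of A_4 = su(5) has determinant 5, so every entry of its inverse is an integer divided by 5. A self-contained check (independent of the specific fluxes):

```python
import sympy as sp

# Cartan matrix of A_4, i.e. of su(5)
C = sp.Matrix([
    [ 2, -1,  0,  0],
    [-1,  2, -1,  0],
    [ 0, -1,  2, -1],
    [ 0,  0, -1,  2],
])
assert C.det() == 5

Cinv = C.inv()
# every entry of C^{-1} is an integer multiple of 1/5
assert all((5 * entry).is_integer for entry in Cinv)
```

Subtracting the Cartan-matrix-inverse correction terms from a matter surface therefore generically introduces coefficients in (1/5)Z, exactly as seen in the explicit flux expressions.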
As remarked at the very end of Section 3.3, it is sometimes convenient to express these cycles as elements A ∈ CH^2(X_5) such that A|_{Y_4} = A. These take the form (4.23). In writing down the equality for the flux A(5_{-2})(λ) we have used a relation valid on X_5. Also note that the U(1)_X-flux, as introduced in [29,31], can likewise be expressed in terms of X_5.
Line Bundles Induced by Matter Surface Fluxes
In this section we take the first step towards computing the massless spectra in the presence of gauge backgrounds associated with the matter surface fluxes derived in Section 4.3. Following our general formalism we have to compute the intersection product of the 2-cycle classes defining the gauge background with the relevant matter surfaces, and then project the result onto the matter curves in question. There are two ways to perform these computations in practice. We begin, in Section 5.1, with the first approach, which has been outlined in Section 3.3. Equivalently, though computationally more involved, we can perform the intersection computations with the help of the presentations (4.23) of the gauge data as pullbacks from the ambient space. In Section 5.2 we will demonstrate this approach for the U (1) X gauge background and we present the detailed computation for all matter surface fluxes in Appendix C. The two approaches match precisely.
Massless Spectrum of A(10 1 )(λ) via Transverse Intersections
To derive the massless spectrum for the gauge background A(10_1)(λ), we first notice that the matter curve C_{10_1} intersects the matter curves C_{5_3}, C_{5_{-2}} and C_{1_5} transversely. Hence, to compute the massless spectra induced by A(10_1)(λ) for states localised on the latter three matter curves, we can directly apply (3.15). The transverse intersection numbers in the base have already been computed in (4.12), and it merely remains to determine the intersection numbers in the fibre over these intersection points. These intersection numbers are computed in Appendix B and specifically listed in Appendix B.3. As explained in more detail therein, due to a seeming Z_2-orbifold singularity in the top over the Yukawa locus Y_1 (despite Y_4 being smooth), some of the intersection numbers of the rational curves in the top over the locus Y_1 are in fact fractional, see (B.36). By use of the information displayed in Appendix B.3, the intersection products eventually take the form (5.1). As an example, consider the intersection in the second line of (5.1). The 2-cycle A(10_1)(λ) is given explicitly in (4.19) in terms of P^1-fibrations over the curve C_{10_1}. As for S(5_{-2}), since the gauge background respects the SU(5) gauge symmetry, we can pick a matter surface for any of the five states in 5_{-2} as listed in (B.28) of Appendix B.2; for instance, take S^(4)(5_{-2}). Now, from (4.12) we read off that C_{10_1} and C_{5_{-2}} intersect over the two point sets Y_1 and Y_2. We hence need to study the splitting of the fibres of A(10_1)(λ) and of S^(4)(5_{-2}) over these loci. The only non-zero intersection numbers between the fibral curves involved are tabulated in (B.36). These are to be viewed as intersection numbers between two rational curves in the complex two-dimensional top over Y_1. Summing everything up explains the term proportional to Y_1 in the second line of (5.1). A similar analysis is to be performed over Y_2.
This element defines a line bundle on C_{5_{-2}}, namely O_{C_{5_{-2}}}((2λ/5) Y_2 - (3λ/5) Y_1). This line bundle has the property that it cannot be obtained by restriction of another line bundle on W to the curve C_{5_{-2}}. This is equivalent to the statement that the divisor (2λ/5) Y_2 - (3λ/5) Y_1 does not arise as a complete intersection of C_{5_{-2}} with another divisor on W. This will be important when it comes to evaluating the sheaf cohomology groups associated with this line bundle, which count the massless matter states on C_{5_{-2}}.
The computation of π_*(S(10_1) · A(10_1)(λ)) is more involved due to the self-intersection of C_{10_1}. However, as pointed out before, there exist non-trivial linear relations between the 2-cycles representing the gauge backgrounds, which allow us to trade non-transverse intersections of this type for a linear combination of transverse ones. In [39] we explicitly show that in the model at hand these relations take the form (5.5). These are the manifestation of a more general set of relations between 2-cycle classes which in fact follow, at a general level, from the absence of gauge and gravitational anomalies in F-theory [39]. With the help of the first relation, it is readily verified^23 that the intersection reduces to the two transverse intersections appearing in the first line, which are computed analogously; the result is tabulated in Table 5.2. This table also contains all other intersections between 2-cycles for matter surface backgrounds and matter surfaces.
Another straightforward application of this formalism is the derivation of the line bundle induced by a Cartan flux on the matter curves. Such a gauge background automatically satisfies the transversality conditions (3.8) and (3.9) but of course violates, by construction, (3.3). Here C ∈ CH^1(W) denotes any curve on the 7-brane divisor. For instance, the hypercharge flux takes the form given in [90,92,95]. The intersections in the fibre of A(C)|_{C_R} with the matter surfaces S_a(R) are readily worked out from the results of Appendix B for the representations present in this model and explicitly confirm the result (3.20).

Footnote 23: The first relation in (5.5) is equivalent to P^1_{3x}(5_3) + P^1_{3G}(5_{-2}) = P^1_{24}(10_1) ∈ CH_2(Y_4). Alternatively, this enables us to rewrite the relevant matter surface such that the intersection in question is given as a sum of two transverse intersections, leading to the same result.
Tensoring with the respective spin bundles gives the line bundles whose cohomologies count the massless matter.

Table 5.2: Chiralities of the massless spectra of the fluxes A_X(F), A(10_1), A(5_3), A(5_{-2}) and A(1_5). These chiralities include the contributions from the spin bundle.
Projection via Restriction from Ambient Space
In this section we exemplify the explicit evaluation of the projection formula (2.18) using a different method. It is based on the representation of the 2-cycles describing the gauge background as the pullback of elements of CH^2(X_5), with X_5 the ambient space of Y_4. Let us make this concrete for the U(1)_X-background A_X(F), postponing the analogous computation for the other types of gauge backgrounds to Appendix C. In Section 4.3 we pointed out that this background can be described by restriction to Y_4 of the cycle (4.25). Our task is to compute the intersections S_a(R) ·_{ι_R} A_X(F) for the matter curves C_{10_1}, C_{5_3}, C_{5_{-2}} and C_{1_5}. We do so by interpreting also S_a(R) as an element of CH^3(X_5) and working entirely on X_5. Let us compute P^1_{4D}(10_1) ·_{ι_{10_1}} A_X(F) term by term, by first determining the vanishing ideal representing the intersection points in X_5; its generators involve expressions such as a_{4,3} e_0 e_1 e_2 xz - a_{3,2} y and a_{2,1} e_0 e_1 e_2 xz - a_{1,0} y. In determining this ideal, we are free to use the relations in CH(X_5) without changing the result up to rational equivalence. The key point is now that, as far as the toric fibre ambient space is concerned, rational and homological equivalence agree. We are therefore free to use the linear relations (4.8) and the relations encoded in the SR-ideal (4.5) of the ambient space. Furthermore, in the following f will denote a polynomial in the coordinate ring of X_5 in the same Chow class as F. With this in mind, we evaluate the individual contributions. A remark on notation: the matter state on C_{5_3} is represented by P^1_{3F}(5_3); since P^1_3 splits into P^1_{3x} and P^1_{3F} over C_{5_3}, and since we only consider intersections with 2-cycles representing gauge invariant backgrounds here, we can represent the matter state in this way.

Table 5.3: The massless spectrum of states localised on the various matter curves in the presence of the flux A_X(F) [29,31] with F ∈ Pic(B_3) is counted by the sheaf cohomologies of the line bundles listed above.
In the fourth line we used the linear relation E_4 = E_1 + 2E_2 + S + 3X - 2Y, and in the last line a further relation of this type. Concerning the vanishing locus V(a_{1,0}, f, e_2, e_4, a_{2,1}), note that e_2 = 0 implies that this is a sublocus of the fibration over the GUT-surface. Inside B_3 this locus is described by the intersection of the divisors F (associated with f), the anti-canonical class of B_3, and twice the anti-canonical class minus W inside this surface. This intersection is empty, and we can therefore discard this vanishing locus. Along the same lines we can discard V(a_{1,0}, f, e_3, e_4, a_{2,1}).
Summing up all contributions and applying the projection π_{10_1}: Y_4|_{C_{10_1}} ։ C_{10_1}, we obtain the induced line bundle on C_{10_1}. It is a simple but tedious exercise to repeat this type of computation for the other matter surfaces. Consequently, the massless spectrum of the flux A_X(F) is counted by the sheaf cohomology groups of the line bundles listed in Table 5.3.
Computing Massless Spectra of Matter Surface Fluxes in an F-theory Model
In this final section we compute the cohomology dimensions of the line bundles on the matter curves deduced previously. If the base space B 3 is embedded into a toric variety X Σ , we can interpret these line bundles as coherent sheaves on X Σ . The computation of the massless spectrum then reduces to the computation of sheaf cohomology groups/dimensions on toric spaces. This mathematical problem has been investigated in great detail by M. Barakat and collaborators [40][41][42][43][44][45], whose technology we adapt for our purposes. This requires making a concrete choice of a base space B 3 and 7-brane divisor W therein, i.e. fixing the complex structure moduli defining the fibration.
In Section 6.1 we outline the algorithm for the computation of the sheaf cohomology groups/dimensions on X Σ . Our algorithms are implemented in gap [46] and phrased in the language of categorical programming of CAP [47][48][49]. For the reader's convenience, we provide more introductory material on this rather mathematical topic in Appendix D. We describe our definition of the concrete model on the toric base B 3 = P 3 in Section 6.2.
In Section 6.3 we finally demonstrate our computations for different choices of complex structure moduli of the matter curves. Thereby we observe jumps in the cohomology dimensions across the moduli space. This phenomenon is relevant, for instance, when scanning for Standard-Model-like spectra in F-theory compactifications.
Computing Sheaf Cohomologies with GAP and CAP
Our results so far imply that we need to compute sheaf cohomologies of line bundles on curves C_R which in general cannot be obtained as pullbacks of line bundles from the 7-brane divisor W or any toric ambient space. For instance, this is the case for the line bundle induced by the gauge background A(10_1) on C_{5_{-2}}: there is no L ∈ Pic(W) whose restriction to C_{5_{-2}} reproduces it.
A similar obstruction exists for the hypercharge gauge background in F-theory GUT-models [90,96] on the divisor W, which cannot be obtained as a pullback bundle from the base. For line bundles on C_R obtained by pullback, the computation of the associated sheaf cohomologies can oftentimes be performed with cohomCalg [97][98][99][100][101] or with techniques employed in heterotic string compactifications for CICYs [102]. Given the above obstruction, these methods are not applicable in our case, and we hence need to go beyond this framework.
Even though L(S R , A) does in general not descend from a line bundle on an ambient space, we can extend this line bundle L(S R , A) by zero outside of C R . The so-obtained object is a coherent sheaf F(S R , A) on the space into which C R is embedded. In case C R is embedded into a toric ambient space X Σ , we thus obtain elements in Coh(X Σ ), the category of coherent sheaves on the toric ambient space X Σ . In this sense, the remaining task is to compute sheaf cohomology groups for such objects.
Our methods to compute the sheaf cohomologies for all elements of Coh(X_Σ) are based on [40][41][42][43][44][45]. These methods, which we extend further, apply as long as X_Σ is a normal toric variety which is either smooth and complete or simplicial and projective. Note that Coh(X_Σ) includes the above-mentioned non-pullback line bundles and the hypercharge flux, but is far bigger than that. Also vector bundles which are not direct sums of line bundles, quotients thereof, T-branes in the language of [103], or skyscraper sheaves can be modelled by our technology. In particular, smoothness of the matter curves is not required.
In Appendix D we briefly review topics from algebra, category theory and algebraic geometry which are necessary to understand our approach to computing sheaf cohomologies of coherent sheaves. Experts may well skip these sections, interested readers can find additional information in [94,104,105].
Let us return to our task of computing the sheaf cohomology groups of the line bundle L(S_R, A) defined, as in (6.1), via a divisor on C_R. Here K_{C_R} denotes, by slight abuse of notation, the divisor on C_R associated with the spin bundle induced by the embedding of the curve C_R into B_3. It is well known that a Cartier divisor D on a complex variety X gives rise to a line bundle O_X(D), and we take this opportunity to recall how this line bundle is actually defined. Namely, O_X(D) can be understood as a sheaf on X.^25 A sheaf F (of Abelian groups) on X assigns to every open subset U ⊆ X an Abelian group F(U). For the line bundle O_X(D) this Abelian group consists of the not identically vanishing meromorphic functions f on U with div(f) + D ≥ 0 on U, together with the zero function, where div(f) denotes the divisor associated with f. This definition is rather abstract. In addition, it is not at all obvious at this stage how we could actually encode this data in a form understandable for computers. To bridge this gap, let us also recall the notion of a so-called ideal sheaf. To this end we first look at the sheaf of holomorphic functions O_X on a complex variety X, which assigns to every open subset U ⊆ X the set O_X(U) = {f: U → C, f holomorphic}. It turns out that O_X(U) is a (commutative and unital) ring. Now let us consider global sections f_i ∈ H^0(X, O_X(D_i)) for suitable divisors D_i ∈ Cl(X) and 1 ≤ i ≤ n. In addition, let U = {U_j}_{j∈J} be an affine open cover of X.
These global sections f_i therefore cut out an analytic subvariety Y = {x ∈ X : f_1(x) = ... = f_n(x) = 0} ⊆ X.^26 On each patch U_j, the functions vanishing along Y ∩ U_j form an ideal in O_X(U_j), and this assignment of ideals defines a sheaf on U_j. Finally, these sheaves on the affine patches U_j glue to form a sheaf on X: the ideal sheaf I_X(f_1, ..., f_n). Now let us look at a Cartier divisor D ⊆ C_R. We assume that D = V(f_1, ..., f_n) for global sections f_1, ..., f_n.^27 We can then wonder if there is a relation between the ideal sheaf I_{C_R}(f_1, ..., f_n) and the line bundle O_{C_R}(D). And indeed, by Proposition 6.18 of [105], the ideal sheaf is identified with O_{C_R}(-D). Up to an important -1, this brings us close to handling the line bundles O_{C_R}(D).

Footnote 25: If the divisor D is Cartier, the sheaf O_X(D) is invertible, which means that it actually defines a line bundle. Equivalently, there exists another sheaf on X, in the case at hand O_X(-D), which satisfies the property O_X(D) ⊗_{O_X} O_X(-D) ≅ O_X, hence the name invertible. We will exploit this invertibility momentarily. Recall also that on a smooth variety every Weil divisor is Cartier.
Footnote 26: More generally, one can use any closed embedding ι: Y ֒→ X to define the morphism of sheaves ι^♯: O_X → ι_* O_Y. The kernel of ι^♯ is then termed the ideal sheaf of Y in X.
Footnote 27: The divisor D need not be a complete intersection. Consequently there is no contradiction between D being of codimension 1 and D being cut out by more than one global section.
To overcome this additional -1, let us recall that the line bundle O_{C_R}(D) is an invertible sheaf. This means that there exists another sheaf F on C_R such that F ⊗ O_{C_R}(D) ≅ O_{C_R}. We can describe this sheaf quite explicitly. Namely, we consider all sheaf homomorphisms from the ideal sheaf, identified above with O_{C_R}(-D), to the structure sheaf O_{C_R}. It is a well-known fact that these homomorphisms form a sheaf, the so-called sheaf-Hom. Therefore we reach the conclusion that O_{C_R}(D) is the sheaf-Hom of the ideal sheaf into O_{C_R}. This connects the defining data of the divisor to the line bundle L(S_R, A) far more explicitly than e.g. (6.4). But we can still do better.
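The chain of identifications just described can be summarised in one line (a standard fact; we restate it here since the original displays did not survive extraction):

```latex
\mathcal{O}_{C_R}(D)
\;\cong\; \mathcal{H}om_{\mathcal{O}_{C_R}}\!\big(\mathcal{O}_{C_R}(-D),\, \mathcal{O}_{C_R}\big)
\;\cong\; \mathcal{H}om_{\mathcal{O}_{C_R}}\!\big(\mathcal{I}_{C_R}(f_1,\ldots,f_n),\, \mathcal{O}_{C_R}\big)\, .
```

This is precisely the form that will be mirrored on the module side below as M = Hom_S(I, S).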
To achieve an even more explicit description, we now turn our attention to toric varieties X Σ without torus factor. As we are eventually interested in algorithms applicable to computers, we use this opportunity to also let go of analytic geometry and model these varieties X Σ over the rational numbers Q. In particular we then employ the language of algebraic geometry to describe these toric varieties X Σ . A brief review of the topic is given in Appendix D.
Recall that X_Σ comes equipped with a coordinate ring S, typically referred to as the Cox ring, which is graded by Cl(X_Σ). The assumption that X_Σ has no torus factor ensures the existence of the so-called sheafification functor from finitely presented (f.p.) graded S-modules to coherent sheaves on X_Σ. Homogeneous ideals I ⊆ S are special examples of f.p. graded S-modules. Hence for homogeneous polynomials f_1, ..., f_n ∈ S, we can turn the ideal I = ⟨f_1, ..., f_n⟩ ⊆ S into a coherent sheaf Ĩ on X_Σ. And indeed Ĩ ≅ I_{X_Σ}(f_1, ..., f_n), i.e. the sheafification of the ideal I provides nothing but the ideal sheaf on X_Σ generated by f_1, ..., f_n. In this sense, our next best model for the sheaf I_{X_Σ}(f_1, ..., f_n) is the ideal ⟨f_1, ..., f_n⟩ itself, which provides a very explicit description of this sheaf.
For the representation of the ideal it turns out to be more practical to specify the relations satisfied by its generators rather than the generators alone. Such relations are conveniently expressed as a linear map M acting on (finite) direct sums of modules over S, respecting the grading (this is explained further in Appendix D.1). Such a homomorphism M presents a finitely presented (f.p.) graded S-module, in the sense defined in Appendix D.2.
The rough idea behind the functor (6.8) is then the following. The toric variety X_Σ is defined by the combinatorics of a fan Σ. For every cone σ ∈ Σ there is an affine patch U_σ and a monomial x^σ ∈ S. Given an f.p. graded S-module M, we can perform a so-called homogeneous localisation of M with respect to x^σ, as explained in Appendix D.4. The result is an f.p. module over the homogeneously localised ring, and hence a coherent sheaf on the affine patch U_σ.
We have thus obtained a coherent sheaf on every affine patch U_σ of X_Σ. It turns out that these sheaves glue together to form a sheaf on the entire variety X_Σ.
For a divisor D = V(f_1, ..., f_n) ⊆ X_Σ, we see from (6.7) that we need to invert the sheaf Ĩ associated with I = ⟨f_1, ..., f_n⟩ in order to describe O_{X_Σ}(D). We are thus looking for an analogue of this inversion in terms of f.p. graded S-modules. Motivated by the fact S̃ ≅ O_{X_Σ}, which can of course be proven rigorously, the analogue in question is M = Hom_S(I, S) (cf. Appendix D.11).
The so-defined f.p. graded S-module M now satisfies M̃ ≅ O_{X_Σ}(D). We provide more details in Appendix D.10.
Finally note that in general the matter curve C_R is not a divisor in a toric ambient space X_Σ but of higher codimension. Suppose therefore that C_R = V(g_1, ..., g_k) and D = V(f_1, ..., f_n) ⊆ C_R for homogeneous polynomials g_i, f_i ∈ S. Then we can consider the graded ring S(C_R) = S/⟨g_1, ..., g_k⟩ and construct from f_1, ..., f_n an f.p. graded S(C_R)-module M_{C_R} which sheafifies to the desired line bundle on C_R. The packages [106][107][108][109] provide the implementation of the category S-fpgrmod in the language of categorical programming of CAP [47][48][49]. Basic functionality for toric varieties is provided by the gap-package ToricVarieties [110]. This package is extended by [111], which provides the algorithms that identify the above-mentioned ideal I. Also the specialised algorithms for the computation of Ext^i_S(I, M)_0 are provided by [111]. The overall procedure is summarised compactly in Figure 3.
This toric space X_Σ is not smooth, but rather a toric orbifold. The elliptic fourfold is embedded as in (6.11). The sections a_{i,j} are taken according to the specialisation of Section 4.1, and W = V(z_4). In particular, Y_4 is smooth for generic such sections, and the matter curves contained in S_GUT can be described in terms of these generic polynomials. From the computational point of view, this toy model has a very appealing feature: the GUT-surface W is itself a toric variety. In such a situation it is always favourable to apply the tools provided by gap [46] to this toric GUT-surface directly, rather than describing it as a subvariety of B_3. In particular, we can model the matter curves in W ≅ P^2_Q with homogeneous coordinates [z_1 : z_2 : z_3] by use of the homogeneous polynomials (6.13), e.g. C_{5_{-2}} as the vanishing locus of a_{1,0} a_{4,3} - a_{2,1} a_{3,2}. To appreciate to what degree the defining polynomials for the matter curves simplify upon restriction to W, we compare in Table 6.2 the number of monomials in the a_{i,j} on B_3 = P^3_Q and in their restrictions to W ≅ P^2_Q:

Table 6.2:
degree of a_{i,j}   monomials on P^3_Q   monomials on P^2_Q
4                   35                   15
7                   120                  36
10                  286                  66
13                  560                  105

In particular, whilst a_{4,3} ∈ H^0(P^3_Q, O(13)) generically consists of 560 monomials, its restriction in H^0(P^2_Q, O(13)) consists of merely 105 monomials.
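The counts in Table 6.2 follow from dim H^0(P^n, O(d)) = binom(d + n, n), the number of degree-d monomials in n + 1 variables. A one-line check:

```python
from math import comb

# dim H^0(P^n, O(d)) = binomial(d + n, n): the number of degree-d
# monomials in the n + 1 homogeneous coordinates of P^n.
degrees = [4, 7, 10, 13]
assert [comb(d + 3, 3) for d in degrees] == [35, 120, 286, 560]   # on P^3_Q
assert [comb(d + 2, 2) for d in degrees] == [15, 36, 66, 105]     # on P^2_Q
```

This also makes the practical point quantitative: restriction to the surface W cuts the number of moduli entering the Gröbner basis computations by a factor of roughly five at degree 13.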
With this preparation we now turn to the actual quantities to evaluate. For instance, the massless spectrum on C_{5_{-2}} induced by the gauge background A(10_1)(λ) is counted by the sheaf cohomologies of the induced line bundle. As outlined in the previous section, with the help of gap [46] we can in principle compute an f.p. graded S(P^2_Q)-module which sheafifies to this bundle on C_{5_{-2}}. Whether this also works in practice strongly depends on the complexity of the involved polynomials a_{i,j}. Although the restriction from B_3 = P^3_Q to W ≅ P^2_Q removes many moduli from the polynomials a_{i,j}, we are still left with a huge polynomial a_{1,0} a_{4,3} - a_{3,2} a_{2,1}. For such a big polynomial, the currently available Gröbner basis algorithms come to their limits, which means that for generic matter curves C_{5_{-2}} we are in practice unable to compute the f.p. graded S(P^2_Q)-module which sheafifies to the above line bundle. To overcome this shortcoming, we will compute the massless spectrum for non-generic matter curves instead. In our first example, we pick the sections a_{i,j} with (pseudo-)random integer coefficients c_i ∈ N_{>0}. Then the discriminant ∆ of P_T can be expanded in terms of the GUT-coordinate w as ∆ = 16 a_{1,0}^4 a_{3,2} (-a_{2,1} a_{3,2} + a_{1,0} a_{4,3}) w^5 + 16 a_{1,0}^2 (-8 a_{2,1}^2 a_{3,2}^2 + 8 a_{1,0} a_{2,1} a_{3,2} a_{4,3} + a_{1,0} a_{3,2}^3 + a_{1,0}^2 a_{4,3}^2) w^6 + ..., where for simplicity we have not written out the a_{i,j} explicitly. No further factorisation occurs, and hence this choice of non-generic matter curves still leaves us with an SU(5) × U(1)_X gauge theory. The curve C_{5_{-2}}, given by the vanishing of a_{1,0} a_{4,3} - a_{3,2} a_{2,1} on W, is not smooth. Let us therefore emphasise again that the techniques implemented in gap [46] are not limited to generic or smooth matter curves.
In fact we are able to handle just about any subvariety of smooth and complete toric varieties, provided its defining polynomials are of reasonable size so that the currently available Gröbner basis algorithms terminate in a timely fashion. We will have far more to say about this in [112].
Massless Spectrum on Non-Generic Matter Curves
As an example, consider a gauge background built from the hyperplane class H ∈ Cl(P^3_Q). This gauge background can be checked to satisfy the quantisation condition (2.8); this follows already from the analysis in [31] around eq. (3.18) therein. The massless spectrum is counted by the sheaf cohomologies of the line bundles (6.20). The first two and the fourth of these line bundles manifestly arise by pullback of a line bundle on the toric base P^3_Q. Therefore we can resolve these bundles by Koszul resolutions formed from vector bundles on P^3_Q. For all of these vector bundles it is possible to compute the cohomology dimensions, e.g. via cohomCalg [97][98][99][100][101].
In general this information alone does not suffice to determine the cohomology dimensions of a pullback line bundle uniquely; rather, the maps in the resolution need to be taken into account. However, in fortunate cases the exact sequences describing the resolution involve a sufficient number of zeroes, which allows one to predict the cohomology dimensions of the pullback line bundle without any knowledge of the maps involved in the resolution. Indeed, the bundles on C_{10_1}, C_{5_3} and C_{1_5} are such fortunate instances. Therefore we are able to determine their cohomology dimensions with the algorithms implemented in the Koszul extension of cohomCalg [97][98][99][100][101]. Note that, as a consequence of the zeroes in the resolution, these values are independent of the complex structure moduli of the matter curves. In fact, if the matter curves in question were smooth, the above results for the cohomology groups on C_{10_1} and C_{5_3} would follow already from the Kodaira vanishing theorem and the Riemann-Roch index theorem.^28 By contrast, to determine the cohomology dimensions of the line bundle L(A, C_{5_{-2}}) we have to invoke the machinery described in Section 6.1, as this line bundle does not descend from a line bundle on P^3_Q. As it turns out, the result is sensitive to the actual choice of Tate polynomials a_{i,j}, i.e. of the complex structure moduli defining the elliptic fibration.
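The Kodaira/Riemann-Roch shortcut just mentioned can be phrased concretely: for a line bundle of degree d on a smooth genus-g curve with d > 2g - 2, Serre duality forces h^1 = 0, and Riemann-Roch then fixes h^0 outright. A generic sketch (not tied to the specific curves of this model, whose genera we do not restate here):

```python
def h0_line_bundle(d: int, g: int) -> int:
    """h^0 of a degree-d line bundle on a smooth genus-g curve.

    Valid when d > 2g - 2: then h^1 = 0 by Serre duality, and
    Riemann-Roch, h^0 - h^1 = d - g + 1, determines h^0 directly.
    """
    if d <= 2 * g - 2:
        raise ValueError("vanishing argument needs d > 2g - 2")
    return d - g + 1

assert h0_line_bundle(3, 0) == 4   # O(3) on P^1 has 4 sections
assert h0_line_bundle(5, 2) == 4
```

When the degree bound fails, as for the bundle L(A, C_{5_{-2}}) on a singular curve, no such closed formula applies, which is exactly why the sheaf-theoretic machinery of Section 6.1 is needed.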
For a number of choices, we compute an f.p. graded S-module M and then deduce the cohomology dimension of M by use of the technologies described in Appendix D.10. The relevant technical details of the modules involved are displayed in Appendix E. We summarise our findings in a table listing, for each choice of module, the resulting cohomology dimensions. In particular we observe jumps in the cohomology dimensions of the line bundle on C 5 −2 as we move through the moduli space of the elliptic fibration π : Y 4 ։ B 3 . For example, moving from the first line to the second, we observe that a pair of a chiral and an anti-chiral (super)field becomes massive, and is therefore no longer accounted for by the massless spectrum. In moving to the last line, another 16 such pairs become massive.
Conclusion and Outlook
In this work we have taken what we believe is an important step forward in our understanding of F-theory vacua beyond the computation of topologically protected quantities such as chiral indices or gauge anomalies. By extending the framework developed in [22] we have computed the exact massless charged spectrum in F-theory compactifications to four dimensions. Our first main result is to extract the gauge bundles on matter curves induced in the presence of all types of gauge backgrounds underlying gauge fluxes in H 2,2 vert ( Y 4 ). This includes both the gauge backgrounds in the presence of U (1) gauge group factors, studied already in [22], and all additional types of vertical gauge backgrounds, which we have called matter surface fluxes. The gauge bundle induced on the matter curves by this second type of backgrounds pushes forward to a coherent sheaf -as opposed to a line bundle -on the ambient space of the curve. Our second main result is to apply and further develop methods from constructive algebraic geometry to calculate the associated sheaf cohomology groups. This technique has allowed us to determine the exact massless charged matter spectrum in an F-theory vacuum with gauge group SU (5) × U (1). In particular we have observed an explicit dependence of the number of chiral-anti-chiral pairs of massless matter on the complex structure moduli.
The framework developed here opens up many new directions both of conceptual interest and of practical relevance. In [112] we will apply similar techniques to evaluate also the cohomology groups associated with non-vertical gauge backgrounds, in particular of the type which in the F-theory GUT literature goes by the name of 'hypercharge flux.' Again these backgrounds have the property that they do not descend from line bundles on the base and hence the full power of the machinery to compute sheaf cohomology groups will be at work.
As a spin-off of our investigation of the Chow groups describing the gauge backgrounds we will present in [39] an intriguing set of relations between cohomology classes of rational 2-cycles. These will be proven to hold on every smooth elliptically fibred Calabi-Yau 4-fold as a consequence of anomaly cancellation in the associated F-theory vacuum. This generalizes and extends observations made in [38]. We conjecture that these relations hold even in the Chow ring, as we verify in non-trivial examples. In fact, said relations in the Chow ring have been used in the present work in order to simplify the intersection theoretic operations which extract the gauge bundles on the matter curves. They are yet another manifestation of the close interrelations between the consistency conditions of effective field theories obtained from string theory and the geometry of the compactification spaces.
More generally it would be desirable to advance our understanding of the second Chow group on elliptic 4-folds further. There are two aspects to this, one depending on the fibration and one depending on the explicit choice of base. Concerning the first, the example fibration studied in this work has the property that h 2,1 ( Y 4 ) = 0 so that the intermediate Jacobian in (2.3) is trivial. As a result, Deligne cohomology and ordinary cohomology coincide. Despite this simplification it is important to perform all computations within the Chow ring, as done in this work, if we want to extract the exact matter spectrum and not only the chiral index. Nevertheless it would be exciting to explore gauge backgrounds associated with non-trivial, but flat configurations of C 3 as encoded in a non-trivial intermediate Jacobian. A generalized Abel-Jacobi map relates these data to the kernel of the cycle map from CH 2 ( Y 4 ) → H 2,2 alg ( Y 4 ). This way, both continuous flat C 3 connections and discrete C 3 backgrounds from torsional H 2,1 ( Y 4 ) can arise. Various aspects of the intermediate Jacobian in F-theory compactified on elliptic 4-folds have been studied recently in [113,114]. Concerning the base B 3 , the explicit example we have studied is manifestly torsion free, and we have therefore not encountered any effects from torsional 4-cycles on the base. It would be interesting to detect such effects by modifying our computations.
An interesting outcome of our investigations is the aforementioned jump in the number of massless states as we vary the complex structure moduli. From general field theory reasoning such jumps in moduli space are clearly expected. They are the manifestation of the lifting of vectorlike pairs as we vary the vacuum expectation value of some of the chiral fields of the model. Analogous effects have been studied intensively for heterotic compactifications such as [115][116][117] and references therein. It would be exciting to determine the minimal number of vectorlike pairs for a given topological type of F-theory model as we vary the complex structure moduli and to interpret this result from an effective field theory point of view.
In fact the explicit computations in this work have been performed at highly non-generic points in moduli space. The practical reason behind this was the need to reduce the complexity of the involved polynomials. Only this reduction allowed gap [46] to model the line bundle in question by an f.p. graded S-module M . This limitation in turn is caused by the involved Gröbner basis algorithms. Recall that the module M sheafifies to give a coherent sheaf M . The computation of the sheaf cohomologies of M involves Gröbner basis algorithms as well. If we restrict ourselves to the computation of only the cohomology dimensions, then it is indeed possible to apply algorithms in which the Gröbner basis computations are replaced by Gauss eliminations. For the latter far more performant algorithms exist, e.g. in MAGMA [118]. Consequently, this approach improved the performance of our computations considerably. However, both computing modules M for line bundles at more general points of the moduli space and subsequently identifying an explicit basis of the cohomology groups of M hinge on more efficient Gröbner basis algorithms. It is therefore desirable to find improvements to such algorithms.
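To give a flavour of the bottleneck, here is a toy Gröbner basis computation in sympy (purely illustrative; the actual computations in the text use gap and MAGMA on far larger ideals, and the ideal below is a made-up example):

```python
from sympy import symbols, groebner

x, y = symbols('x y')

# Toy ideal I = (x^2 + y, x*y - 1); a lexicographic Groebner basis
# eliminates x and exposes the relation satisfied by y alone.
G = groebner([x**2 + y, x*y - 1], x, y, order='lex')
print(G.exprs)   # -> [x + y**2, y**3 + 1]
```

Already on small inputs the worst-case complexity of Buchberger-type algorithms is severe, which is why replacing Gröbner steps by exact Gauss elimination, wherever only dimensions are needed, pays off.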
We have focused, in this article, on the computation of the massless matter spectrum in F-theory compactified to four dimensions. As stressed already in the introduction, in the recently explored F-theory compactifications to two dimensions [11,12] the massless matter spectrum likewise depends on the gauge background, which can be described in very similar terms. It will be interesting to extend our formalism to F-theory compactifications on Calabi-Yau 5-folds.
Finally, we have insisted throughout this work that the elliptic fibration Y 4 be smooth in order to avoid dealing with singularities. Since resolving an elliptic fibration amounts to moving along the Coulomb branch of the dual three-dimensional M-theory vacuum this limits us to studying Abelian gauge backgrounds. Similar challenges arise when it comes to describing certain T-brane configurations which obstruct a resolution of the 4-fold and force us to work on singular spaces instead [21,77,78,103,119]. We are optimistic that these are not insurmountable obstacles. In particular, a generalisation of intersection theory within the Chow ring to non-smooth varieties exists, and it is hence a natural question how far one can push the present formalism towards more general, non-Abelian backgrounds. We look forward to coming back to these questions in the near future.
A.1. Rational Equivalence and the Refined Cycle Map
In this appendix we describe the parametrisation of elements in H 4 D ( Y 4 , Z(2)) representing the 3-form gauge background in F/M-theory on a smooth elliptic 4-fold Y 4 in terms of the Chow group CH 2 ( Y 4 ) of complex 2-cycles modulo rational equivalence. For further details we also refer to [22].
The group of algebraic cycles Z p ( Y 4 ) consists of finite formal sums C = Σ N i=1 n i C i for suitable N ∈ N, n i ∈ Z and C i not necessarily smooth but irreducible subvarieties of Y 4 . 29 An algebraic cycle C ∈ Z p ( Y 4 ) is rationally equivalent to zero, C ∼ 0, if and only if C = Σ N i=1 div(f i ) for suitable N ∈ N and invertible rational functions f i on irreducible subvarieties of Y 4 of codimension p − 1. Let Rat p ( Y 4 ) denote the subgroup formed from all algebraic cycles which are rationally equivalent to 0. Then the Chow group CH p ( Y 4 ) is defined as the quotient CH p ( Y 4 ) = Z p ( Y 4 )/Rat p ( Y 4 ). To any algebraic cycle in Z p ( Y 4 ) one can associate a cocycle in H p,p Z ( Y 4 ) via a group homomorphism γ Y 4 ,p : Z p ( Y 4 ) → H p,p Z ( Y 4 ) which is termed the cycle map.
Next let us explain how we specify 3-form data, given by elements of H 4 D ( Y 4 , Z(2)), by a class of algebraic cycles in CH 2 ( Y 4 ). The key insight is that there exists a so-called refined cycle map γ̂ : Z 2 ( Y 4 ) → H 4 D ( Y 4 , Z(2)) (see e.g. p. 123 in [70]). This morphism is a group homomorphism and respects rational equivalence, i.e. given algebraic cycles C 1 , C 2 ∈ Z p ( Y 4 ) with C 1 ∼ C 2 it holds γ̂(C 1 ) = γ̂(C 2 ). Therefore the refined cycle map extends to a map CH 2 ( Y 4 ) → H 4 D ( Y 4 , Z(2)). In concrete applications, it is oftentimes possible to express an algebraic cycle A ∈ Z 2 ( Y 4 ) in terms of data of a toric ambient space. This is possible whenever Y 4 can be embedded into a smooth toric variety X Σ . Let j : Y 4 ֒→ X Σ denote the corresponding embedding. Given that X Σ is smooth it is known that this map indeed induces pullback maps j * : CH p (X Σ ) → CH p ( Y 4 ) of the Chow groups. Let S be the Cox ring of X Σ (over Q), I SR ≤ S its Stanley-Reisner ideal and I LR ≤ S the ideal of linear relations. Then by smoothness of X Σ it even holds [94] that CH • (X Σ ) ⊗ Z Q ∼ = S/(I SR + I LR ), and the cycle map identifies algebraic cycle classes on X Σ with their cohomology classes. Consequently, in such situations modifications of the pre-image of A on X Σ which leave its class in H • (X Σ , Z) unchanged do not alter the gauge background described by A via the refined cycle map. Indeed, we will make use of this freedom to simplify our computations later.
29 We use the symbol Z p ( Y 4 ) to denote the group of algebraic cycles of complex codimension p in Y 4 . In contrast, Z p ( Y 4 ) with a lower index is to denote the group of algebraic cycles of complex dimension p in Y 4 . We adopt this notation also for the Chow groups, i.e. CH 2 ( Y 4 ) represents classes of algebraic cycles of codimension 2 in Y 4 , whilst e.g. CH 1 ( Y 4 ) with a lower index is for classes of algebraic cycles of dimension 1 in Y 4 .
By use of the commutative diagram in Figure 4 we can now summarise our strategy as follows: 30
1. Specify A ∈ Z 2 (X Σ ). Use manipulations respecting the homology class associated with A to simplify this algebraic cycle or represent it differently whenever necessary.
Note that in practical applications, including the model presented in Section 6.2, it can happen that the toric ambient space of Y 4 is not smooth, but rather a complete toric orbifold. Since such a toric variety is simplicial it follows from theorem 12.5.3 in [94] that CH • (X Σ ) ⊗ Z Q ∼ = S/(I SR + I LR ). Therefore we should wonder whether even in such a geometric setup, the pullback j * : CH(X Σ ) → CH( Y 4 ) is well-defined. To this end, note that this very pullback amounts to computing intersection products of elements of CH(X Σ ) and Y 4 , and this intersection is well-defined as long as the embedding ι : Y 4 ֒→ X Σ is a closed embedding and Y 4 is a local complete intersection [22], as these conditions guarantee a well-defined Gysin-homomorphism. Hence, under these assumptions even for a complete toric orbifold X Σ we can start with elements A ∈ Z 2 (X Σ ), modify them by manipulations which respect the homology class of A in X Σ to simplify computations, and use the pullback of this cycle to Y 4 to model G 4 -fluxes on Y 4 .
Finally, let us mention that in this article we will not distinguish algebraic cycles A ∈ Z 2 ( Y 4 ) from their classes in CH 2 ( Y 4 ). Rather we use capital and upright letters to denote both the relevant class and (if necessary) an explicit representative of this class. Similarly A will denote elements of CH 2 (X Σ ) and Z 2 (X Σ ) depending on the context. Whenever it is not explicitly necessary to assume a toric ambient space X Σ we will use the label X 5 for a (non-toric) complex 5-dimensional ambient space of Y 4 .
A.2. Intersection Product and Extraction of Line Bundles
The Chow ring is endowed with a natural intersection product. To introduce this intersection product, let f : X → Y be a morphism from a variety X (not necessarily smooth) to a smooth variety Y .
Figure 4: Summary of how (classes of) algebraic cycles specify the gauge background via the (refined) cycle map.
Then the intersection product CH k (X) ⊗ CH l (Y ) → CH k+l−dim(Y ) (X) is a well-defined mapping. Given α ∈ CH k (X), β ∈ CH l (Y ) we denote the image of α ⊗ β under the above map as α · f β. We term this the intersection product of α, β under the morphism f . We will oftentimes omit f .
Next recall that we have matter curves C R ⊆ B 3 over which the elliptic fibre degenerates. Therefore the generic fibre over these matter curves in the resolved 4-fold Y 4 π → B 3 is such that π −1 (C R ) ⊆ Y 4 is a sum of P 1 's (the rational curves into which the T 2 splits along these loci) fibred over C R . Formal sums of P 1 's fibred over C R constitute the matter surfaces S a R -each of which represents a state of localised matter in representation R with weight β a (R) over the matter curve C R .
Let ι R,a : S a R ֒→ Y 4 be the canonical embedding of this matter surface into Y 4 . In a slight abuse of notation we also denote the class of this matter surface by S a R ∈ CH 2 (S a R ). Moreover let π R,a be the projection π R,a : S a R ։ C R . (A.10) Then we can compute the intersection product S a R · ι R,a A ∈ CH 0 (S a R ), where A ∈ CH 2 ( Y 4 ) represents a C 3 background. Consequently L R,a := π R,a * S a R · ι R,a A ∈ CH 0 (C R ) ∼ = Pic (C R ) (A.11) defines a line bundle on the matter curve C R .
In explicit examples, one of the obstacles to overcome is the computation of the relevant intersection products. We have presented two ways to perform this computation. Either we proceed as detailed in Section 3.3 and Section 5.1. Alternatively, in suitable models we have an embedding j : Y 4 ֒→ X Σ . For the fluxes of our interest it is then possible to describe A ∈ CH 2 ( Y 4 ) as A ∈ CH 2 (X Σ ). We then pick an explicit algebraic cycle to represent this class, and manipulate this cycle by use of I SR (X Σ ) and I LR (X Σ ). These manipulations allow us to ensure transverse intersections, in which case it suffices to merely compute the set-theoretic intersections.
A similar computation allows us to extract the line bundles on the non-Abelian brane stacks along ∆ I , as described in Section 2.3.2: Here we consider the resolution divisor E i I with projection π i I : E i I ։ ∆ I and canonical embedding ι i I : E i I ֒→ Y 4 , giving rise to (A.12)
B. Fibre Structure of a Resolved Tate Model
In this appendix we analyse the fibre structure of the resolved 4-fold π : Y 4 ։ B 3 of the F-theory GUT models described in Section 4.1, which are based on [29].
The general philosophy is to identify P 1 -fibrations -of which some will turn out to be present only over 'special' subloci of S GUT , namely the matter curves and the Yukawa points -and then to work out their intersection numbers in the fibre. The discussion of the massless spectrum of matter surface fluxes, e.g. in Section 5, relies heavily on this information. This in turn is the main reason why we wish to present the details of the fibre structure here.
The fibre structure of the SU (5) × U (1) X -top in question was originally worked out in [29]. We refine this analysis in two important respects:
• A P 1 -fibration present over generic points of S GUT -that is points which are not contained in any matter curve -in general splits into a formal linear combination of P 1 -fibrations over the matter curves. Similarly a P 1 -fibration present over generic points of the matter curves -i.e. non-Yukawa points of the matter curves -in general splits into a formal linear combination of P 1 -fibrations. In order to deduce these splittings we use primary decompositions of the relevant ideals, which differs from the approach in [29]. In consequence we find different defining equations for P 1 3G (5 −2 ), P 1 3H (5 −2 ) and different splittings for these fibrations once restricted to the Yukawa locus Y 1 .
• In [29] the intersections of the P 1 -fibrations were identified to have the structure of certain Dynkin diagrams. However, the intersection numbers -in particular the self-intersection numbers -were not stated. Here we work out these numbers.
To describe our findings let us introduce the notation T 2 (C R ) to indicate the total elliptic fibre over a matter curve C R . Similarly T 2 (Y i ) is to indicate the total elliptic fibre over the Yukawa locus Y i . Furthermore let P i (C R ) and P i (Y j ) denote a P 1 -fibration over a matter curve C R and a Yukawa locus Y j respectively. Then for all matter curves C R and all Yukawa loci Y j described by the SU (5) × U (1) X -top in question, certain universal intersection relations hold. Next suppose Y j ⊆ C R and consider the splitting of P 1 i (C R ) onto Y j . Naively we might expect the intersection numbers in the fibre to be inherited by the split components. As it turns out, in the geometry described by the SU (5) × U (1) X -top this need not be the case. Rather it fails precisely for the restriction of C 10 1 to Y 1 which involves either P 1 24 (Y 1 ) or P 1 2B (Y 1 ).
The reason for this failure is that these two P 1 -fibrations encounter a Z 2 -singularity over their intersection point. This parallels the situation studied for the enhancement A 5 ֒→ E 6 in [120]. As a consequence we find half-integer intersection numbers. A particular example where (B.4) fails is as follows. Before we proceed, let us mention that we work with the triangulation T 11 of [29] throughout this entire article.
B.1. Intersection Structure away from Matter Curves
We start by looking at the five divisors E i := V (P ′ T , e i ) ⊆ Y 4 for 0 ≤ i ≤ 4. Note that e i = 0 automatically implies that the 'new' GUT-coordinate e 0 e 1 e 2 e 3 e 4 vanishes. Therefore E i indeed is a subset of Y 4 . These subsets can be understood as fibrations of the i-th exceptional divisor over the GUT-surface S GUT . Now let p ∈ S GUT be a point which is not contained in any matter curve. By means of the projection map π : Y 4 ։ B 3 we can describe the fibre over the point p as π −1 (p). We now wish to work out the intersection structure of the divisor classes E i in π −1 (p). For simplicity we merely focus on the set-theoretic intersection E 0 ∩ E 1 . By use of the Stanley-Reisner ideal of the top -see Section 4.1 -it is readily confirmed that this intersection consists of a single point in the fibre, where p = [p 1 : p 2 : · · · : p n−1 : e 0 = 0] are inhomogeneous coordinates of the point p. Hence we have found a single intersection point in π −1 (p). This finding can be made precise to state that in π −1 (p) the divisor classes E 0 and E 1 intersect with intersection number 1 [29]. Moreover this analysis is easily repeated for all intersections of the divisor classes E i , 0 ≤ i ≤ 4.
To compute U (1) X -charges, intersection numbers with the U (1) X -generator U X are required. These intersections involve [29] where k B 3 is a polynomial in the coordinate ring of X 5 such that its degree matches K B 3 . To simplify notation we set α = V (P ′ T , s), β = V (P ′ T , z) and γ = V (P ′ T , k B 3 ). The intersection numbers are then as follows:
(B.15)
β indicates the Cartan charges of such a linear combination, i.e. lists the intersection numbers with the resolution divisors E i , 1 ≤ i ≤ 4. We will adopt these notations also for the other matter curves. All that said, the matter surfaces over C 10 1 take the following form:
Intersection Structure over C 5 3 away from Yukawa Loci
Over C 5 3 the following six P 1 -fibrations are present: (B.17) The total elliptic fibre over C 5 3 is given by a formal linear combination of these fibrations. The above P 1 -fibrations emerge from the E i according to the following table.
Original Split components over Over p ∈ C 5 3 which is not a Yukawa point, these P 1 -fibrations intersect in π −1 (p) as follows: The intersections with the pullbacks of the divisors E i onto the fibre over C 5 3 are as follows.
The matter surfaces over C 5 3 are: Intersection Structure over C 5 −2 away from Yukawa Loci By primary decompositions it is readily confirmed that over C 5 −2 the following P 1 -fibrations are present: Note that these results differ from [29], where primary decomposition was not applied. The total elliptic fibre over C 5 −2 is given by The above P 1 -fibrations emerge from the E i according to the following table.
Original Split components over Over p ∈ C 5 −2 which is not a Yukawa point, these P 1 -fibrations intersect in π −1 (p) as follows: The intersections with the pullbacks of the divisors E i onto the fibre over C 5 −2 are as follows.
The matter surfaces over C 5 −2 are:
Intersection Structure over C 1 5 away from Yukawa Loci
Over the singlet curve C 1 5 = V (P ′ , a 3,2 , a 4,3 ) the following two P 1 -fibrations are present: These fibrations intersect in π −1 (p) as follows: The intersection numbers with the divisors E i are given by:
31 Note that only the P 1 -fibration P 1 A (1 5 ) has vanishing intersection with the zero-section β = V (z) and hence defines a viable matter surface. As this fibration satisfies q = 5, we denote the singlet curve as C 1 5 .
B.3. Intersection Structure over Yukawa Loci
Intersection Structure over Yukawa Locus Y 1
Over the Yukawa point Y 1 = V (w, a 1,0 , a 2,1 ) the following six P 1 -fibrations are present: The total elliptic fibre over Y 1 is given by a formal linear combination of these fibrations. Starting from C 10 1 , the above fibrations emerge from the following splitting process: Split components over C 10 1 X Split components over Y 1 P 1 0A (10 1 ) The splitting behaviour, when approached from C 5 −2 , is different: The intersection numbers in the fibre over Y 1 are as follows:
Intersection Structure over Yukawa Locus Y 2
Over the Yukawa locus Y 2 = V (w, a 1,0 , a 3,2 ) the following seven P 1 -fibrations are present: The total elliptic fibre over Y 2 is given by (B.38). The above P 1 -fibrations result from splitting the corresponding fibrations over C 10 1 according to the following table.
Split components over C 10 1 X Split components over Y 2 When approached from C 5 3 the splittings are different, namely Split components over C 5 3 X Split components over Y 2 Finally, the splitting as seen from C 5 −2 , is yet again different: Split components over C 5 −2 X Split components over Y 2 The intersection numbers in the fibre over Y 2 are as follows:
Intersection Structure over Yukawa Locus Y 3
Over the Yukawa locus Y 3 = V (w, a 3,2 , a 4,3 ) the following seven P 1 -fibrations are present: The total elliptic fibre over Y 3 is given by (B.44). The individual P 1 -fibrations appear from the split components over C 5 3 as follows: Split components over C 5 3 X Split components over Y 3 However, if we approach Y 3 from C 5 −2 we have the following behaviour.
Split components over C 5 −2 X Split components over Y 3 Finally, when approached from the singlet curve C 1 5 we have the following splitting: Split components over C 1 5 X Split components over Y 3 The intersection numbers in the fibre over Y 3 are:
C. Line Bundles Induced by Matter Surface Fluxes from Ambient Space Intersections
In this appendix we compute the line bundles induced by the matter surface fluxes. As in Section 5.2 we perform the following steps: • The SU (5)× U (1) X -top induces the linear relations (4.8) on the ambient space X 5 . Gauge backgrounds represented by elements of CH 2 ( X 5 ) can thus be altered upon use of these linear relations.
• In doing so, we ensure that the intersections between gauge backgrounds A ∈ CH 2 ( X 5 ) and matter surfaces S a C R ∈ CH 3 ( X 5 ) are transverse. Consequently the relevant intersections can then be worked out merely from the corresponding set-theoretic intersections.
• Given a gauge invariant background, we compute these intersections for one matter surface over each matter curve and project the result to the matter curve C R . Tensoring the associated line bundle with the spin bundle K 1/2 C R induced by the holomorphic embedding of the matter curve gives the final result for the line bundle L(S R , A) such that H i (C R , L(S R , A)) counts the massless matter in representation R.
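Schematically, the steps above combine into one formula (a compact restatement of (A.11) together with the spin-bundle twist; here K^{1/2} denotes the spin bundle of the matter curve, and the identification of h^0 and h^1 with the chiral and anti-chiral multiplicities is the standard one):

```latex
L(S_{\mathbf{R}}, A)
  \;=\; \pi_{\mathbf{R},a\,*}\big( S^a_{\mathbf{R}} \cdot_{\iota_{\mathbf{R},a}} A \big)
        \otimes K_{C_{\mathbf{R}}}^{1/2}\,,
\qquad
n_{\mathbf{R}} = h^0\big(C_{\mathbf{R}}, L(S_{\mathbf{R}}, A)\big)\,,\quad
n_{\overline{\mathbf{R}}} = h^1\big(C_{\mathbf{R}}, L(S_{\mathbf{R}}, A)\big)\,.
```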
Let us mention that the following computations make use of rational equivalence on B 3 to (re)express cycles in the classes K B 3 and W whenever this simplifies the computations. Furthermore, we stress again that we are assuming that B 3 is torsion-free. Together with the requirement that H 1 (B 3 , Q) = 0, which follows from the fact that Y 4 is a 'proper' Calabi-Yau, this implies that H 1 (B 3 , Z) = 0 and thus CH 1 (B 3 ) = H 1,1 (B 3 ).
C.1. Line Bundle Induced by ∆A(λ)
As a preparation we show that the gauge background ∆A(λ) projects to the trivial line bundle on all matter curves. This result will be used in the next section.
Line Bundle Induced on C 10 1 By use of the linear relations we have By use of the Stanley-Reisner ideal it then follows P 1 4D (10 1 ) · ∆A (λ) = −V a 10 , e 0 , e 4 , y, xse 2 e 3 + a 21 z 2 e 0 = ∅ .
(C.3)
Line Bundle Induced on C 5 3 By use of the linear relations we can write By use of the Stanley-Reisner ideal it then follows −1P 1 3x (5 3 ) · ∆A(λ) = ∅.
(C.6)
Now we project this locus down onto C 5 −2 . Thereby we find Line Bundle Induced on C 1 5 By use of the linear relations we can write Let k B 3 and w be polynomials in the coordinate ring of X 5 with degree of K B 3 and W respectively. Then it follows by use of the Stanley-Reisner ideal that Next we project down this locus onto C 1 5 . Thereby we find
C.2. Line Bundles Induced by A(10 1 )(λ) and A(5 3 )(λ)
In this subsection we will work out the massless spectrum for the following two matter surface fluxes: The relevant matter surface is −P 1 3x (5 3 ) = −V (a 3,2 , e 3 , x). Upon use of the linear relations induced from the SU (5) × U (1) X -top, we can write From this it follows In this expression k B 3 is a polynomial in the coordinate ring of X 5 whose degree matches K B 3 . Consequently π C 5 3 * (−P 1 3x (5 3 ) · A(10 1 )(λ)) = 2λ 5 Y 2 ∈ CH 1 (C 5 3 ), which implies Line Bundle Induced by A(10 1 )(λ) on C 5 −2 The relevant matter surface is P 1 3H (5 −2 ) = V (e 3 , a 21 e 0 xze 1 e 2 − a 10 y, a 43 e 0 xze 1 e 2 − a 32 y). Upon use of the linear relations induced from the SU (5) × U (1) X -top, we can write (C.14) From this it follows (C. 15) In this expression k B 3 and w are polynomials in the coordinate ring of X 5 whose degree match K B 3 and W respectively. Consequently (C. 16) This leads us to conclude Line Bundle Induced by A(10 1 )(λ) on C 1 5 The relevant matter surface is P 1 A (1 5 ) = V (s, a 3,2 , a 4,3 ). Upon use of the linear relations induced from the SU (5) × U (1) X -top we can write From this it follows where the polynomials k B 3 and w are picked as before. Consequently Line Bundle Induced by A(10 1 )(λ) on C 10 1 To compute the massless spectrum of A(10 1 )(λ) on C 10 1 we note that We found in Appendix C.1 that the line bundle induced by A(λ) on the matter curves is trivial. The massless spectrum of A X (−λW ) follows from Section 5.2. In particular we have The line bundle induced by A(10 1 )(λ) on C 10 1 follows from D(S 10 1 , A(5 3 )(λ)). So let us compute this divisor now. We first recall that the relevant matter surface is given by P 1 4D (10 1 ) = V (a 1,0 , e 4 , xse 2 e 3 + a 2,1 z 2 e 0 ). Upon use of the linear relations we can write From this it follows This finally enables us conclude that D(S 10 1 , A(10 1 )(λ)) = 3λ
Line Bundles Induced by A(5 3 )(λ)
We can now turn the logic around and use the relation to compute the massless spectrum of A(5 3 )(λ) from knowledge of the massless spectra of all other fluxes. This leads to the results summarised in Table 5.1.
Line bundle induced by
To compute this spectrum we introduce Upon use of the linear relations induced from the SU (5) × U (1) X -top we can write (C.43) By use of the Stanley-Reisner ideal it is readily verified that (C. 44) By projecting this quantity onto C 1 5 we obtain That said, we can now use (C.42) to compute the massless spectrum of A(1 5 )(λ) on C 1 5 . This yields
D. From Modules to Coherent Sheaves and Sheaf Cohomology Groups
This appendix provides some of the mathematical background underlying the computations outlined in Section 6. The task arising in the main text is the following: Given a toric variety X Σ with Cox ring S, a matter curve C ⊆ X Σ and a divisor D ∈ Div(C), construct f.p. S-modules M ± such that M ± ∈ Coh (X Σ ) are supported on C only and satisfy M ± ∼ = O C (±D). This construction makes use of the sheafification functor which turns an f.p. graded S-module M into a coherent sheaf M on X Σ . See [94,105,121] and references therein for further information.
We give details on this construction in Appendix D.9. As a preparation we begin with a general review of the category of f.p. graded modules in Appendix D.1 and Appendix D.2 and explain the computation of extension modules in Appendix D.11. The connection to sheaves (Appendix D.3) is described in Appendix D.4 to Appendix D.6 in general terms, and specialised to toric varieties in Appendix D.7 to Appendix D.9. Finally, Appendix D.10 describes how to extract the sheaf cohomologies of O C (±D) from M ± . Some words of caution before we start: In a supersymmetric context, it is common practice in physics to work with analytic geometry. Unfortunately, the complex numbers C cannot be represented exactly in a computer, and the same limitation holds for holomorphic functions, i.e. absolutely convergent power series. Hence computer applications require us to switch to algebraic geometry over the rational numbers Q (or finite field extensions thereof such as Q[i] = Q + iQ). Therefore, unless stated explicitly, this appendix works with (toric) varieties over Q with Cox ring S. We will provide a basic introduction to these spaces from the point of view of algebraic geometry. For simplicity, this appendix is formulated in the language of varieties. However a scheme-theoretic approach is indeed possible. The interested reader may consult [122][123][124][125] and references therein.
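The switch to exact arithmetic over Q matters in practice: cohomology dimensions are ranks and kernels of matrices, and floating point cannot decide rank reliably. A minimal sketch of exact Gauss elimination over Q (using Python's Fraction; the matrix is a made-up example, not one of the maps from the text):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix over Q via exact Gauss elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        # find a pivot in column c at or below row r
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        # clear column c in all other rows, exactly
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# dim ker = #columns - rank: the kind of count entering dimension-only
# cohomology computations.
A = [[1, 2, 3], [2, 4, 6], [0, 1, 1]]
print(rank(A), 3 - rank(A))   # -> 2 1
```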
D.1. The Category of Projective Graded S-Modules
We assume that X Σ is a normal toric variety over Q which is either smooth and complete or simplicial and projective. For such varieties X Σ the coordinate ring S -termed the Cox ring -is a polynomial ring S = Q[x 1 , . . . , x m ] which is graded by Cl(X Σ ) ∼ = Z n . This means that there is a homomorphism of monoids deg : Mon(S) → Z n such that the images of the monomials x 1 , . . . , x m generate Z n as a group. Here Mon(S) denotes the set of monomials in S. For a monomial f ∈ Mon(S) we term deg(f ) the degree of f . A polynomial P ∈ S for which all its monomials have identical degree d ∈ Z n is termed a homogeneous polynomial (of degree d). The homogeneous elements of degree d form a subgroup S d of S. As a group, the ring S therefore admits a direct sum decomposition S = ⊕ d∈Z n S d such that the multiplication in S satisfies S d · S e ⊆ S d+e for all d, e ∈ Z n . We term the group S d the degree d layer of S.
Given a Z n -graded Cox ring S = Q[x 1 , . . . , x m ], we can define for every d ∈ Z n a degree-shift of this ring. Namely S(d) is the Z n -graded ring with S(d) e = S(0) e+d ≡ S e+d .
As an example consider P 2 Q . This toric variety has Cl(P 2 Q ) = Z. Its Cox ring Q[x 1 , x 2 , x 3 ] is Z-graded upon deg(x 1 ) = deg(x 2 ) = deg(x 3 ) = 1. In particular 1 ∈ S(0) satisfies deg(1) = 0. Now consider the ring S(−1). By definition it satisfies S(−1) e = S e−1 . Consequently those x ∈ S(0) which have degree 0 are considered elements of degree 1 in S(−1). For example 1 ∈ S(−1) therefore satisfies deg(1) = +1.
A (left) S-module M is an abelian group together with a multiplication S × M → M , (s, m) → s · m, such that for all s 1 , s 2 ∈ S and all m 1 , m 2 ∈ M it holds
• s 1 · (s 2 · m 1 ) = (s 1 · s 2 ) · m 1 ,
• (s 1 + s 2 ) · m 1 = s 1 · m 1 + s 2 · m 1 ,
• s 1 · (m 1 + m 2 ) = s 1 · m 1 + s 1 · m 2 ,
• 1 · m 1 = m 1 .
Consequently a (left) S-module M looks very much like a 'vector space over S', except for the fact that S need not be a field. In the following we consider S-modules of the form M = ⊕ i∈I S(d i ) with degree shifts d i ∈ Z n . An S-module is called graded precisely if S d M e ⊆ M d+e . Note that the indexing set I need not be finite. However, if I is finite, then |I| is termed the rank of M . For reasons that will become clear eventually, we will refer to such modules as projective graded (left) S-modules. For ease of notation we will oftentimes drop the term 'left'. So unless stated explicitly, we always mean left-modules.
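Returning to the P 2 Q example, the degree layers can be enumerated explicitly (a small sketch: dim S_d for S = Q[x1, x2, x3] with all generators of degree 1 is the number of monomials of total degree d, and the degree shift acts as S(−1)_e = S_{e−1}):

```python
from itertools import combinations_with_replacement
from math import comb

def dim_layer(d, nvars=3):
    """dim S_d for S = Q[x_1, ..., x_nvars] with all deg(x_i) = 1:
    count the monomials of total degree d."""
    if d < 0:
        return 0
    return sum(1 for _ in combinations_with_replacement(range(nvars), d))

# Matches the stars-and-bars count C(d + nvars - 1, nvars - 1):
assert all(dim_layer(d) == comb(d + 2, 2) for d in range(6))

# Degree shift: dim S(-1)_1 = dim S_0 = 1, the layer containing 1 in S(-1)
print(dim_layer(1 - 1))   # -> 1
```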
The morphisms in the category of projective graded (left) S-modules are module homomorphisms which respect the grading. Given the projective graded S-modules M and N of finite rank m and n, a morphism M → N is given by a matrix A with entries from S. It is now a pure matter of convention to express elements of M, N either as columns or rows of polynomials from S. Suppose that we express e ∈ M, f ∈ N as rows of polynomials; then A has to be a matrix with m rows and n columns. In particular, we multiply e ∈ M from the left onto the matrix A to obtain its image e · A ∈ N. As this multiplication is performed from the left, this convention applies to left-modules.
Of course, one can also choose to represent elements of M and N as columns. In this case A must be a matrix with n rows and m columns and we multiply e ∈ M from the right to obtain its image A · e ∈ N . In this case one deals with projective graded right S-modules and projective graded right S-module homomorphisms.
For historical reasons, it is tradition in algebra to use left-modules in papers, and we follow this tradition here. Hence elements of projective graded S-modules are always expressed as rows of polynomials in S.
Given a Z n -graded ring S, the category of projective graded S-modules happens to be an additive monoidal category, which is both strict and rigid symmetric closed [47][48][49]. We provide an implementation of this category in the language of CAP [47][48][49] in the software package [106].
D.2. The Category S-fpgrmod
Based on a Z n -graded ring S = Q[x 1 , . . . , x m ] and its associated category of projective graded S-modules, we now wish to build a new category -the category of f.p. graded S-modules (S-fpgrmod). The importance of this construction for us is that it allows us to describe ideals (or vanishing loci) via the relations enjoyed by the generators of the ideal. In order to understand this, we need a bit of preparation.
The basic idea is very simple: Objects in S-fpgrmod are presented by morphisms of projective graded S-modules of finite rank.
For an example let us pick P^2_Q again and look at the following two morphisms of projective graded S-modules (of finite rank):
• ϕ : 0 → S(0),
• ψ : S(−2)^⊕3 → S(−1)^⊕3, given (with respect to the standard generators) by the matrix

R = ( 0 −x_3 x_2 ; x_3 0 −x_1 ; −x_2 x_1 0 ) .
Abstractly we intend to describe the modules M_ϕ := coker(ϕ) and M_ψ := coker(ψ). A means to present these modules M_ϕ, M_ψ is indeed provided by the morphisms ϕ, ψ. Therefore we term the codomain of ϕ, ψ the generators of M_ϕ and M_ψ respectively. Similarly the domain of ϕ, ψ is given the name relations of M_ϕ and M_ψ respectively.
In the following we will make use of commutative diagrams of projective graded S-modules. In these diagrams we box morphisms of projective graded S-modules (of finite rank) in blue colour if they are to present an f.p. graded S-module. Consequently we depict M_ψ, M_ϕ as follows: [diagrams of the boxed morphisms ψ and ϕ]. Obviously the relations of M_ϕ are 0. Therefore M_ϕ is canonically isomorphic to the projective graded S-module S(0). M_ψ however is not quite so simple - its generators have to satisfy 3 relations. Let us work out these relations in detail. To this end we first identify generating sets: let R = {r_1, r_2, r_3} be the standard generators of S(−2)^⊕3 and G = {g_1, g_2, g_3} the standard generators of S(−1)^⊕3. Consequently we have

ψ(r_1) = −x_3 g_2 + x_2 g_3 . (D.10)

Now let us think of the cokernel of ψ in terms of classes. Then the representatives of these classes are not unique, but can be chosen up to addition of elements of the form −x_3 g_2 + x_2 g_3. In fact, there are two more such redundancies, which follow from the images of r_2 and r_3. Namely

ψ(r_2) = x_3 g_1 − x_1 g_3 , ψ(r_3) = −x_2 g_1 + x_1 g_2 . (D.11)

In this sense there are the 3 relations (D.10) and (D.11) for the generators g_1, g_2, g_3 of M_ψ.
Let us now turn to the morphisms in S-fpgrmod. A morphism M_ψ → M_ϕ of f.p. graded S-modules is a commutative diagram of the following form: [diagram: a morphism B between the generator modules together with a compatible morphism A between the relation modules]. Recall that we are working with projective graded left S-module homomorphisms. Therefore the commutativity is to say that A · 0 = R · B. We say that the above morphism is congruent to the zero morphism precisely if there exists a morphism of projective graded S-modules γ : S(−1)^⊕3 —D→ 0 such that the following diagram commutes: [diagram]. Intuitively, the existence of such a morphism implies that all generators of M_ψ can be thought of as relations of M_ϕ. A particular example of such a morphism of f.p. graded S-modules is as follows: [display (D.14)]. It is readily seen that this morphism is not congruent to the zero morphism. To gain some intuition, let us investigate this morphism in more detail. To this end recall the generating sets R and G of domain and codomain of ψ as introduced above. In particular we can apply the displayed mapping of projective graded modules to the elements g_i ∈ S(−1)^⊕3. Thereby we find

[Footnote 32: Note that there is a slight difference between the notion of a classical category and a CAP-category. The latter comes equipped with the additional datum of congruence of morphisms. Upon factorisation of this congruence, a CAP-category turns into the corresponding classical category. For ease of computer implementations, the congruences are added as additional datum. See [47][48][49] for more information.]
g_1 ↦ x_1; similarly g_2 maps to x_2 and g_3 to x_3. We denote these images by H = {h_1, h_2, h_3}. Note that the images of the relations map to zero, e.g. −x_3 g_2 + x_2 g_3 turns into

−x_3 h_2 + x_2 h_3 = −x_3 x_2 + x_2 x_3 = 0 . (D.16)

In fact it turns out that the map S(−2)^⊕3 —R→ S(−1)^⊕3 is the kernel embedding of the morphism of projective graded S-modules S(−1)^⊕3 → S(0) given by the matrix (x_1, x_2, x_3)^T. For this very reason, the morphism in (D.14) is a monomorphism of f.p. graded S-modules.
Consequently it describes the embedding of an ideal into S(0), and this very ideal is the one generated by x_1, x_2, x_3. Therefore M_ψ is nothing but a presentation of the irrelevant ideal B_Σ = ⟨x_1, x_2, x_3⟩ ⊆ S of P^2_Q and the morphism in (D.14) is its standard embedding B_Σ ↪ S. To emphasize this finding we extend the diagram to take the following shape: [extended diagram]. It is of crucial importance to distinguish the black and red arrows in this diagram. The black ones are morphisms of projective graded S-modules, whilst the red ones mediate between f.p. graded S-modules.
We have therefore arrived at the representation of an ideal - here the ideal with generators B_Σ = ⟨x_1, x_2, x_3⟩ - via its relations in the form of an f.p. graded S-module. This is precisely the connection promised at the beginning of this section.
Let us use this opportunity to point out that for a given f.p. graded S-module, there exist numerous presentations. For example the following is an isomorphism of two presentations that are both canonically isomorphic to the projective graded S-module S(0): [diagram]. Note also that the rank of generators and the rank of relations are unrelated. We have already given examples of f.p. graded S-modules for which the rank of the relations is either smaller than or identical to the rank of the generators. Finally consider the ideal ⟨x_1^2, x_1 x_2, x_1 x_3, x_2^2, x_2 x_3⟩. Then the standard embedding of this ideal takes the following form: [diagram]. Hence the 5 generators of this ideal satisfy 6 relations.
It can be proven that the category S-fpgrmod is an Abelian monoidal category which is both strict and symmetric closed. See [47][48][49] for further details. This category being Abelian, kernel and cokernel exist for all morphisms in S-fpgrmod. Let us use this opportunity to display the kernel and cokernel of ι : B_Σ ↪ S in the following diagram: [diagram]. S-fpgrmod being Abelian, a morphism is a monomorphism precisely if its kernel object is the zero object. The trivial box on the left hence reflects the fact that ι : B_Σ ↪ S(0) is a monomorphism. The object boxed on the very right is a factor object. Such factor objects will later serve as models for structure sheaves of subloci of X_Σ. We will make use of this in Appendix D.9.
Quite generally, it is possible to associate to a given category C its category of morphisms. An implementation of this mechanism is provided in the gap-package [107]. Applying this technique to the category of projective graded S-modules, as introduced in this subsection, provides the category S-fpgrmod. Along this philosophy, this very category is implemented in the language of CAP [47][48][49] in the software package [108].
Finally a word on the terminology of the projective graded S-modules (of finite rank). These modules are canonically embedded into the category S-fpgrmod. In the latter they constitute the projective objects. Hence their name.
D.3. A Briefing on Sheaves
Since our next goal is to understand the sheafification of the modules constructed so far, we now include a brief review of the definition of a sheaf. Experts may safely skip this standard exposition. Let (X, τ) be a topological space. A presheaf F of Abelian groups on (X, τ) consists of Abelian groups F(U) for all open U ⊆ X and group homomorphisms - termed restriction maps -

res^U_V : F(U) → F(V)

for all open V ⊆ U, such that the following conditions hold true:
• res^U_U = id_{F(U)},
• res^V_W ∘ res^U_V = res^U_W for all open W ⊆ V ⊆ U.
The elements of F(U) are termed (local) sections of F over U. The restriction maps are typically denoted as res^U_V(s) = s|_V for s ∈ F(U). A presheaf F of Abelian groups on a topological space (X, τ) is a sheaf precisely if for every open U ⊆ X and every open cover U = {U_i}_{i∈I} of U the following diagram is exact:

F(U) → ∏_{i∈I} F(U_i) ⇉ ∏_{i,j∈I} F(U_i ∩ U_j) .

By this we mean the following: • The map s ↦ ( s|_{U_i} )_{i∈I} is injective, i.e. given that s|_{U_i} = 0 for all i ∈ I it holds s = 0.
• The image of the first map is the kernel of the double arrow. So for a family (s_i)_{i∈I} with s_i ∈ F(U_i) and s_i|_{U_i ∩ U_j} = s_j|_{U_i ∩ U_j} for all i, j ∈ I, there exists s ∈ F(U) with s|_{U_i} = s_i for all i ∈ I.

Next we pick a point p ∈ X. Given a sheaf F on the topological space (X, τ), we consider pairs (U, s)_p where U is an open neighbourhood of p and s ∈ F(U). For such pairs we can define an equivalence relation: (U, s)_p ∼ (V, t)_p precisely if there exists an open neighbourhood p ∈ W ⊆ U ∩ V with s|_W = t|_W. The set of equivalence classes

F_p := { (U, s)_p } / ∼

is the stalk of the sheaf F in the point p. F_p is an Abelian group.
By replacing Abelian group in the above lines by ring, module, algebra, . . . one defines along the very same lines the concept of sheaves of rings, modules, algebras . . . on a topological space.
Given a topological manifold X, for every open U ⊆ X the set of continuous functions f : U → R forms an Abelian group. Let res^U_V be the ordinary restriction of functions. Then we obtain from this data a sheaf on the topological manifold X. This sheaf of continuous functions is conventionally denoted by O_X. By inspection, O_X is even a sheaf of rings, i.e. {f : U → R, f continuous} possesses the structure of a ring, and the ordinary restriction of functions respects this ring structure.
Likewise, on a smooth manifold the smooth (real valued) functions give rise to the sheaf of smooth (real valued) functions. On a complex manifold the sheaf of holomorphic functions can be considered. As continuous/ smooth/ holomorphic functions are characteristic to the structure of topological/ smooth/ complex manifolds the above sheaves are referred to as the structure sheaf of the manifold.
All of the above structure sheaves are sheaves of rings. This observation leads to the concept of a ringed space. A ringed space is a pair (X, O X ) consisting of a topological space X and a sheaf of rings O X on X. On a ringed space it is possible to consider sheaves of O X -modules. Such a sheaf F on X assigns to every open U ⊆ X an O X (U )-module F(U ). In addition the restriction maps of F respect this module structure. Coherent sheaves are special such sheaves of O X -modules, as we will point out in Appendix D.6.
D.4. Localisation of Rings
Sheafification of modules on affine varieties (and eventually also toric varieties) makes use of localisation of rings. Therefore, let us use this subsection to recall the basics behind this procedure.
We consider a commutative unital ring R and a multiplicatively closed subset 1 ∈ S ⊆ R. We now construct a new ring R_S from this data. To this end define the following (equivalence) relation on R × S

(r_1, s_1) ∼ (r_2, s_2) ⇔ ∃ t ∈ S : t · (r_1 s_2 − r_2 s_1) = 0 (D.27)

and use it to consider the equivalence classes

r_1/s_1 := [(r_1, s_1)] := { (r_2, s_2) ∈ R × S | (r_1, s_1) ∼ (r_2, s_2) } . (D.28)

We denote the collection of all of these equivalence classes by S^{−1}R = R_S. It is readily verified that the binary compositions

r_1/s_1 + r_2/s_2 := (r_1 s_2 + r_2 s_1)/(s_1 s_2) , r_1/s_1 · r_2/s_2 := (r_1 r_2)/(s_1 s_2)

turn this set R_S into a ring. This ring R_S is termed the localisation of R at S.
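For intuition, the equivalence relation (D.27) and the ring operations can be played through in the simplest case R = Z with S = {1, 2, 4, 8, . . .}; since Z has no zero divisors, the auxiliary element t in (D.27) is not needed. The class `LocalizedInt` below is our own illustration of this special case, not part of any of the cited software packages:

```python
class LocalizedInt:
    """Element r/s of the localisation Z_S with S = {1, 2, 4, 8, ...}.

    Since Z is an integral domain, (r1, s1) ~ (r2, s2) iff
    t*(r1*s2 - r2*s1) = 0 for some t in S, which reduces to r1*s2 == r2*s1.
    """

    def __init__(self, r: int, s: int):
        # Denominators must come from the multiplicatively closed set S.
        assert s > 0 and (s & (s - 1)) == 0, "denominator must be a power of 2"
        self.r, self.s = r, s

    def __eq__(self, other: "LocalizedInt") -> bool:
        # The cross-multiplication test implementing relation (D.27).
        return self.r * other.s == other.r * self.s

    def __add__(self, other: "LocalizedInt") -> "LocalizedInt":
        return LocalizedInt(self.r * other.s + other.r * self.s, self.s * other.s)

    def __mul__(self, other: "LocalizedInt") -> "LocalizedInt":
        return LocalizedInt(self.r * other.r, self.s * other.s)
```

For example, 1/2 and 2/4 represent the same class, and 1/2 + 1/2 equals 1/1, exactly as the equivalence classes and binary compositions above prescribe.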
Oftentimes one localises rings at prime ideals p ⊆ R. Recall that a prime ideal p ⊆ R in a commutative ring is a proper ideal (i.e. p ≠ R) such that for all a, b ∈ R with ab ∈ p it holds a ∈ p or b ∈ p. In particular 1 ∉ p, for otherwise p = R in contradiction to p being proper. However, for the localisation R_S we assumed 1 ∈ S! Therefore, by convention, localisation at a prime ideal p ⊆ R means to localise at the multiplicatively closed set R − p, i.e.
R_p := (R − p)^{−1} R . (D.30)

Another common type of localisation is at an element 0 ≠ f ∈ R. By this we mean to form the multiplicatively closed set S = {1, f, f^2, f^3, . . .} and then to localise at this set S. The localisation at this set S, induced from 0 ≠ f ∈ R, is denoted by R_f.
Yet another common situation is to start with a graded ring R. Given a multiplicatively closed set 1 ∈ S of homogeneous elements, one can perform the so-called homogeneous localisation. To this end we first define the degree of the equivalence class r_1/r_2 as

deg(r_1/r_2) := deg(r_1) − deg(r_2) . (D.31)

Thereby one realises that the localisation R_S is a graded ring. The homogeneous localisation R_(S) is now defined by

R_(S) := (R_S)_0 , (D.32)

i.e. R_(S) consists of all elements of R_S that have vanishing degree. This is easily generalised to homogeneous elements 0 ≠ f ∈ R and homogeneous prime ideals p ⊆ R.
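As a small cross-check of these definitions, consider the following worked example (our own choice of ring and grading):

```latex
% Homogeneous localisation of S = Q[x_1, x_2] with deg x_1 = deg x_2 = 1
% at the homogeneous element f = x_2:
S_{x_2} \;=\; \mathbb{Q}\big[x_1,\, x_2,\, x_2^{-1}\big],
\qquad
\deg\!\Big(\frac{r}{x_2^{k}}\Big) \;=\; \deg(r) - k .
% The degree-0 layer collects exactly the quotients of equal total degree:
S_{(x_2)} \;=\; \big(S_{x_2}\big)_0 \;=\; \mathbb{Q}\big[\tfrac{x_1}{x_2}\big].
```

Such degree-0 quotients are precisely the local coordinate functions that appear on the affine patches in Appendix D.7.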
D.5. Sheafification of Modules on Affine Varieties
The general idea of sheafification on affine varieties is very simple: Given an affine variety X with coordinate ring R, we define a sheaf M~ associated to an R-module M by stating its (local) sections over the basic open sets D(f) ⊆ X, namely

M~(D(f)) = M ⊗_R R_f ,

where R_f denotes the localisation of R at f, as introduced in Appendix D.4. Intuitively, R_f consists of all (rational) functions on D(f). In the remainder of this section we make these statements more precise. The reader not interested in all the technical details may safely jump directly to Appendix D.7.
Let us first recall the notion of an affine variety with a view towards algebraic geometry. To this end let R be a commutative unital ring. Recall that an ideal m ⊆ R is termed a maximal ideal precisely if for every proper ideal a ⊆ R with m ⊆ a it holds m = a.
Whilst every maximal ideal is a prime ideal, the converse is not quite true. To see this consider the ring k[x] where k is an algebraically closed field. For every a ∈ k the ideal ⟨x − a⟩ is a maximal ideal, and - provided that k is algebraically closed - all maximal ideals of k[x] are of this form. However, since k[x] is free of zero divisors, the trivial ideal ⟨0⟩ is a prime ideal as well. Clearly this ideal is not maximal since ⟨0⟩ ⊊ ⟨x⟩! Let us construct a topological space from R. A scheme-theoretic approach would consider the set of all prime ideals

Spec(R) := { p ⊆ R | p a prime ideal } (D.35)

and equip it with the Zariski topology. As just pointed out, for an algebraically closed field k this means

Spec(k[x]) = { ⟨x − a⟩ | a ∈ k } ∪ { ⟨0⟩ } .

So besides containing all 'points' of k, this affine scheme contains also the so-called generic point ⟨0⟩. Such points distinguish the scheme-theoretic approach from a variety-theoretic approach.
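The maximality of the ideals associated to points a ∈ k follows from a standard argument, which we spell out for completeness:

```latex
% Evaluation at a \in k is a surjective ring homomorphism with kernel <x - a>:
\mathrm{ev}_a \colon k[x] \longrightarrow k, \qquad f \longmapsto f(a),
\qquad \ker(\mathrm{ev}_a) = \langle x - a \rangle .
% Hence the quotient is a field, which forces <x - a> to be maximal:
k[x]\big/\langle x - a \rangle \;\cong\; k .
```

By contrast, k[x]/⟨0⟩ ≅ k[x] is an integral domain but not a field, which reproduces the statement that ⟨0⟩ is prime but not maximal.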
Here we present our results in the language of (toric) varieties over Q. However it is possible to formulate these findings also in the language of (toric) schemes [122][123][124][125].
In following [94] we therefore consider the set of maximal ideals

Specm(R) := { m ⊆ R | m a maximal ideal } .

The Zariski topology on Specm(R) is defined by saying that for every ideal a ⊆ R the following set is closed:

V(a) := { m ∈ Specm(R) | a ⊆ m } .

As an example let us consider the ring C[x]. Since C is algebraically closed we have an isomorphism of sets Specm(C[x]) ≅ C. Next recall that every f ∈ C[x] has finitely many zeros. In addition, by the Hilbert basis theorem, every ideal a ⊆ C[x] is finitely generated. Consequently the proper closed subsets of Specm(C[x]) ≅ C are precisely the finite subsets of C. All that said, we finally turn to the sheafification of an R-module M. This means that we wish to turn this module M into a sheaf M~ on the affine variety Specm(R) [105]. We describe this sheaf by merely stating its local sections over a basis of the topology, namely the open sets D(f) := Specm(R) − V(⟨f⟩). It holds

M~(D(f)) = M ⊗_R R_f .

Of course one has to check that these local assignments satisfy all conditions of a sheaf as stated in Appendix D.3. For further details see [105].
In particular the ring R is an R-module. It sheafifies to form the structure sheaf O Specm(R) ∼ = R. Therefore, since M ⊗ R R f is an R f -module, the sheaf M is a sheaf of O Specm(R) -modules.
D.6. Coherent Sheaves on (Abstract) Varieties
Let X be a topological space and O_X a sheaf of rings on X. The pair (X, O_X) is termed a locally ringed space if for every p ∈ X the stalk O_{X,p} is a local ring. 34 An abstract variety is a locally ringed space (X, O_X) such that for every p ∈ X there exists an open neighbourhood p ∈ U ⊆ X such that (U, O_X|_U) is isomorphic (as locally ringed space) to (Specm(R), R~) for a suitable commutative unital ring R.
For a sheaf F of O_X-modules on a variety (X, O_X) we define the following notions:
• For an open subset U ⊆ X, the restriction F|_U is the sheaf on U given by F|_U(V) := F(V) for all open V ⊆ U.
• F is quasicoherent precisely if X admits an open affine cover {U_α}_{α∈I} 35 such that for every α ∈ I there exists an R_α-module M_α with the property M~_α ≅ F|_{U_α}.
• F is coherent if in addition for every α ∈ I the module M α is finitely presented.
D.7. Sheafification of F.P. Graded S-Modules on Toric Varieties
We will assume that X Σ is a toric variety over Q without torus factor. Let S = Q[x 1 , . . . , x m ] be its Cox ring. As we already mentioned, this ring is graded by Cl(X Σ ). Very much along the same lines as for affine varieties, an f.p. graded S-module M can be turned into a coherent sheaf M on X Σ . Here we briefly review this sheafification. Further details can be found in [94].
Recall that X_Σ is defined by the combinatorics of a fan Σ. There are precisely m rays in the fan Σ. We denote the associated ray generators by ρ_1, . . . , ρ_m. To each of these ray generators there is precisely one indeterminate x_i ∈ S associated. We may assume that x_1 ↔ ρ_1, x_2 ↔ ρ_2, . . . . Given a cone σ ∈ Σ we can now form the monomial

x^σ := ∏_{ρ_i ∉ σ} x_i .

The affine variety associated to this cone σ is given by U_σ ≅ Specm(Q[σ^∨ ∩ M]). 36 In addition there is an isomorphism of graded rings

π*_σ : Q[σ^∨ ∩ M] → (S_{x^σ})_0 .

Consequently we can also understand the affine variety associated to the cone σ as U_σ = Specm((S_{x^σ})_0). Of course these affine varieties have to glue together in a meaningful fashion. The key observation is that for a face τ = σ ∩ m^⊥ of σ it holds (S_{x^τ})_0 = ((S_{x^σ})_0)_{π*_σ(χ^m)}. The commutativity of the following diagram then establishes the desired gluing: [diagram (D.43)].

[Footnote 34: This means that the ring O_{X,p} has a unique maximal ideal.] [Footnote 35: In this diagram we use the symbol |_U to denote the restriction of a sheaf to an open subset U ⊆ X_Σ. Recall that we have described this process (briefly) in Appendix D.6.]
Since the module M described by the cokernel of ϕ : 0 → S is isomorphic to S itself, it sheafifies by definition to the structure sheaf M~ = O_{P^1_Q}. The global sections of this sheaf over the two affine patches and their overlap are given by the rings Q[x_1/x_2], Q[x_2/x_1] and Q[x_1/x_2, x_2/x_1] respectively. Hence, if we set t ≡ x_1/x_2, we obtain from (D.47) the commutative diagram of sections

[diagram (D.48): Q[t] and Q[t^{−1}] restricting into Q[t, t^{−1}]]

In this diagram, the maps res^U_V are the restriction maps of the sheaf M~. A global section corresponds to a pair of local sections whose restrictions to the overlap agree. It is readily verified that the only such pairs are the diagonal elements (α, α) ∈ Q × Q.
Similarly the twist S(n) - considered as an f.p. graded S-module with trivial relation module - sheafifies to give the twisted structure sheaf O_{P^1_Q}(n). Let us exemplify this statement. For n = 1, the commutative diagram of (local) sections takes the analogous shape: [diagram]. From this we see H^0(P^1_Q, S(1)~) ≅ ⟨x_1, x_2⟩_Q. Similarly we find for n = −1 the commutative diagram: [diagram], from which one infers that there are no non-trivial global sections.
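The global-section counts claimed here can be reproduced by listing the degree-n monomials of Q[x_1, x_2]; the helper below (our own naming, not part of the cited packages) does precisely that:

```python
def global_sections_P1(n: int) -> list[str]:
    """Monomial basis of the degree-n layer of Q[x1, x2] with
    deg(x1) = deg(x2) = 1, i.e. of the global sections of the
    sheafification of the twist S(n) on P^1_Q."""
    if n < 0:
        return []  # no monomials of negative degree, so no global sections
    return [f"x1^{a}*x2^{n - a}" for a in range(n + 1)]
```

For n = 1 this returns a basis of size 2, matching H^0 ≅ ⟨x_1, x_2⟩_Q, and for n = −1 it returns the empty list.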
D.8. More Properties of the Sheafification Functor
What we have described thus far is the functor which turns an f.p. graded S-module into a coherent sheaf on the toric variety X Σ . It is this functor which enables us to model coherent sheaves in a way suitable for computer manipulations as objects of the category S-fpgrmod. An important question is whether this description is unique. As it turns out, it is not.
Assume for a moment that X_Σ were a smooth toric variety with irrelevant ideal B(Σ) ⊆ S. Then the sheafification functor M ↦ M~ descends to an equivalence of categories between the quotient of S-fpgrmod by the subcategory of modules with vanishing sheafification and Coh(X_Σ), which therefore allows for an ideal parametrisation of Coh(X_Σ) [42,126].
In summary, any coherent sheaf F ∈ Coh(X_Σ) can be modelled by M ∈ S-fpgrmod - however not uniquely, i.e. in general there exist many non-isomorphic modules M with the property M~ ≅ F.
We give an example on P^2_Q. In Appendix D.2 we discussed the f.p. graded S-module B_Σ. Recall also that S(0) can canonically be considered an f.p. graded S-module and that B_Σ ≇ S(0) (as f.p. graded S-modules). In addition we pointed out in Appendix D.7 that S(0) sheafifies to give the structure sheaf. In Appendix D.10, we will describe our strategy to extract from an f.p. graded S-module M the sheaf cohomologies of M~. In so doing we will point out why S is a perfect model for the structure sheaf of P^2_Q, whilst B_Σ is not quite as good a choice.
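The difference between B_Σ and S(0) as modules, despite their sheafifications agreeing, is visible already at the level of graded layers: the two modules differ exactly in degree 0. A small sketch (function names are ours):

```python
from math import comb

def dim_S_layer(d: int) -> int:
    """Q-dimension of S_d for S = Q[x1, x2, x3], deg(x_i) = 1
    (the Cox ring of P^2_Q)."""
    return comb(d + 2, 2) if d >= 0 else 0

def dim_BSigma_layer(d: int) -> int:
    """Q-dimension of the degree-d layer of the irrelevant ideal
    <x1, x2, x3>.

    Every monomial of degree >= 1 is divisible by some x_i, so the layers
    agree with those of S for d >= 1; only the constants are missing at
    d = 0."""
    return dim_S_layer(d) if d >= 1 else 0
```

The single missing constant in degree 0 shows B_Σ ≇ S(0) as modules, yet this discrepancy is invisible to the associated sheaves.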
D.9. Line Bundles from F.P. Graded S-Modules
Before we discuss the computation of sheaf cohomologies from f.p. graded modules, we first describe how we actually obtain modules that sheafify to the coherent sheaves in question. The major task from the main text is as follows: Let X_Σ be a normal toric variety that is either smooth and complete or simplicial and projective, and let S be its Cox ring. Given a subvariety C = V(g_1, . . . , g_k) ⊆ X_Σ and D = V(f_1, . . . , f_n) ∈ Div(C) - both not necessarily complete intersections - we want to construct an f.p. graded S-module M such that M~ ∈ Coh(X_Σ) is supported only over C and satisfies M~|_C ≅ O_C(±D).
To find a module M such that M~ is supported only over C and satisfies M~|_C ≅ O_C(−D) we proceed as follows: 1. The polynomials f_i and g_j are homogeneous. In consequence the ring S(C) := S/⟨g_1, . . . , g_k⟩ is graded by Cl(X_Σ) and the canonical projection map π : S ↠ S(C) allows us to consider the matrix

M = (π(f_1), . . . , π(f_n)) ∈ M(1 × n, S(C)) .

This matrix gives rise to an f.p. graded S(C)-module A_C, and by proposition 6.18 of [105] it holds A~_C ≅ O_C(−D).
2. As a next step we extend A_C to become a (proper) coherent sheaf on X_Σ. To this end note that the entries of the matrix ker(M) are elements of S(C), i.e. are equivalence classes of elements of the ring S. For each entry of ker(M) pick one representative in S. Thereby we obtain a matrix ker(M)' with entries in S. 39 This enables us to construct the following f.p. graded S-module A: [display]. 3. The coherent sheaf A~ could have support outside of C. To ensure that A_C has been extended by zero outside of C, we tensor A with the structure sheaf O_C of C. Given the matrix N = (g_1, g_2, . . . , g_k)^T, the following f.p. graded S-module B sheafifies to give this structure sheaf O_C: [display]. Consequently consider I_D = A ⊗_S B ∈ S-fpgrmod. It defines I~_D ∈ Coh(X_Σ) which is supported only on C and satisfies I~_D|_C ≅ O_C(−D).
To obtain an f.p. graded S-module M such that M~ is zero outside of C and satisfies M~|_C ≅ O_C(+D), we perform step 1. This yields the module A_C described above. Now we dualise this module in the category S(C)-fpgrmod, i.e. we form A_C^∨ (c.f. Appendix D.11 for more details on the dualisation). With this f.p. graded S(C)-module we now proceed just as before, i.e. we turn A_C^∨ into an S-module A^∨ and then compute the tensor product O_D = A^∨ ⊗_S B. This module O_D defines a coherent sheaf O~_D ∈ Coh(X_Σ) which is supported only on C and satisfies O~_D|_C ≅ O_C(+D).

D.10. Sheaf Cohomologies from F.P. Graded S-Modules

Let X_Σ be a normal toric variety which is either smooth and complete or simplicial and projective, and let S be its Cox ring. By now we have described means to parametrise coherent sheaves F ∈ Coh(X_Σ) by f.p. graded S-modules. Given such an f.p. graded S-module M, we can wonder how we extract the sheaf cohomologies of M~ from the data defining M. Our algorithm is as follows: 1. Given that X_Σ is smooth and complete or simplicial and projective, the cohomCalg algorithm applies to it [97][98][99][100][101]. This enables us to compute the vanishing sets V^i(X_Σ) fairly rapidly. Note that the so-obtained vanishing sets serve as a properly refined version of the semigroup K_sat introduced in [127] which was used in [128] to propose a means to compute sheaf cohomology of coherent sheaves.
2. Our algorithm now determines an ideal I ⊆ S and e ∈ Z_≥0 such that the pair (I^(e), M) 40 satisfies a number of conditions. These conditions are phrased in terms of the sets V^i(X_Σ) and designed such that

H^i(X_Σ, M~) ≅ Ext^i_S(I^(e), M)_0 . (D.58)

See [129] for details on this isomorphism.
3. The i-th (global) extension module Ext i S (I, M ) of the f.p. graded S-modules I, M happens to be an f.p. graded S-module by itself. Its definition and properties are discussed in detail in Appendix D.11. We truncate it to degree 0 ∈ Cl(X Σ ). Since M is a coherent sheaf, this degree-0-layer happens to be a finite-dimensional Q-vector space. Our algorithm returns its Q-dimension.
The packages [106][107][108][109] provide the implementation of the category S-fpgrmod in the language of categorical programming of CAP [47][48][49]. In addition, basic functionality of toric varieties is provided by the gap-package ToricVarieties of [110]. The package [111] extends this package and provides routines to find an ideal I and e ∈ Z_≥0 which fit the criteria in the second step. In addition this package provides implementations of algorithms which allow for a quick computation of Ext^i_S(I, M), as explained in Appendix D.11. Let us finally give an example of how these computations work in practice. To this end we pick P^2_Q again and consider the f.p. graded S-module B_Σ. We already stated that this module sheafifies to give B~_Σ ≅ O_{P^2_Q}. Let us justify this statement by computing the sheaf cohomology dimensions of B_Σ to see that they match up with this assertion. The first step of our algorithm computes the vanishing sets V^i(P^2_Q).

[Footnote 40: I^(e) denotes the ideal generated by the e-th powers f_1^e, . . . , f_n^e of the generators of I.] [Footnote 41: It is a coincidence that I happens to be the irrelevant ideal for projective spaces. For sufficiently more involved toric spaces, this is no longer true. Rather we pick an ideal associated to an ample divisor in X_Σ, as this guarantees the existence of a finite e for which the isomorphism in (D.58) holds true.]
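For the record, the expected cohomology dimensions of the line bundles O(d) on P^2_Q can be cross-checked against the classical closed-form expressions; the sketch below uses these standard formulas (our own helper, not the cohomCalg algorithm itself):

```python
from math import comb

def cohomology_O(d: int) -> tuple[int, int, int]:
    """(h^0, h^1, h^2) of the line bundle O(d) on P^2_Q.

    h^0 counts the degree-d monomials in Q[x1, x2, x3]; h^2 follows from
    Serre duality with canonical bundle O(-3); h^1 vanishes on P^2."""
    h0 = comb(d + 2, 2) if d >= 0 else 0
    h2 = comb(-d - 1, 2) if d <= -3 else 0
    return (h0, 0, h2)

# d = 0 reproduces the dimensions of the structure sheaf: (1, 0, 0).
```

In particular, a module modelling the structure sheaf must reproduce the dimensions (1, 0, 0), and the alternating sum of the three values agrees with the Euler characteristic χ(O(d)) = (d+1)(d+2)/2 for every d.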
This shows that indeed B_Σ has the cohomology dimensions of O_{P^2_Q}. To tell how good or bad a model for O_{P^2_Q} this module B_Σ really is, let us consider the sequence h^i(e) = dim_Q Ext^i_S(I^(e), B_Σ)_0.
D.11. Computing Extension Modules in S-fpgrmod
Given a normal toric variety X_Σ without torus factor, we have explained in Appendix D.7 that an f.p. graded S-module M serves as model for the coherent sheaf M~ on X_Σ. In Appendix D.10 we have outlined how one can extract the cohomology dimensions of M~ from the module M itself. This process involves the computation of the extension modules Ext^i_S(M, N) for f.p. graded S-modules M, N. As the extension modules are f.p. graded S-modules by themselves, they can be truncated to any d ∈ Cl(X_Σ). In particular we have seen that the truncations Ext^i_S(M, N)_0 are important for the computation of the cohomology dimensions. Let us therefore, in this section, provide details on the computation of Ext^i_S(M, N)_0. For the sake of simplicity, we assume that S = Q[x_0, . . . , x_n] is a Z^n-graded ring and start by investigating Hom_S(M, N)_0 first. [The first step of this computation is given by a pullback diagram.] Hence G is the pullback object of the morphisms (ρ^∨_M ⊗ id_{G_N}, id_{R^∨_M} ⊗ ρ_N) and α is its canonical projection onto G^∨_M ⊗ G_N. 2. Next compute the following pullback diagram in the category of projective graded S-modules: [diagram]. Finally, let us turn to the computation of the extension modules Ext^i_S(M, N) and their truncations. To this end we first recall the theoretical foundations of the bifunctor Ext^n_S(−, −). Let us pick an f.p. graded S-module B and use it to define an endofunctor G_B of S-fpgrmod by G_B := Hom_S(−, B). Then Ext^n_S(−, B) is the n-th right-derived functor of the functor G_B. This abstract statement can be made far more explicit. Namely, to compute Ext^n_S(A, B) (n ≥ 0) for two f.p. graded S-modules A, B we perform the following steps: 1. Consider a minimal free resolution F_•(A) of A by projective objects in S-fpgrmod. Use this opportunity to recall that the projective objects in S-fpgrmod are the projective graded S-modules of finite rank. In addition we state it as a fact that such a resolution exists for every A ∈ S-fpgrmod.
The resolution F • (A) will be denoted as follows: . . .
2. Now apply the functor G_B to (D.72). Recall that this functor is a contravariant left-exact endofunctor of S-fpgrmod. Therefore we obtain the following complex in S-fpgrmod: [complex (D.73)]. Recall that we want to compute the homology of the complex (D.73) at position n. Therefore we need to take into account both α_n and α_{n+1}. To this end we compute the kernel embedding of the cokernel projection of the n-th morphism in the above resolution. It is readily verified that this yields a morphism µ : X → Y, and the functor G_B then induces the morphism µ~ : Hom_S(Y, B) → Hom_S(X, B). It turns out that the cokernel object of this morphism is isomorphic to the homology of the complex (D.73) at position n. So coker(µ~) ≅ Ext^n_S(A, B). We compute µ~ based on the commutative diagram in Figure 5.
Pigs with an INS point mutation derived from zygotes electroporated with CRISPR/Cas9 and ssODN
Just one amino acid at the carboxy-terminus of the B chain distinguishes human insulin from porcine insulin. By introducing a precise point mutation into the porcine insulin (INS) gene, we were able to generate genetically modified pigs that secreted human insulin; these pigs may be suitable donors for islet xenotransplantation. The electroporation of the CRISPR/Cas9 gene-editing system into zygotes is frequently used to establish genetically modified rodents, as it requires less time and no micromanipulation. However, electroporation has not been used to generate point-mutated pigs yet. In the present study, we introduced a point mutation into porcine zygotes via electroporation using the CRISPR/Cas9 system to generate INS point-mutated pigs as suitable islet donors. We first optimized the efficiency of introducing point mutations by evaluating the effect of Scr7 and the homology arm length of ssODN on improving homology-directed repair-mediated gene modification. Subsequently, we prepared electroporated zygotes under optimized conditions and transferred them to recipient gilts. Two recipients became pregnant and delivered five piglets. Three of the five piglets carried only the biallelic frame-shift mutation in the INS gene, whereas the other two successfully carried the desired point mutation. One of the two pigs mated with a WT boar, and this desired point mutation was successfully inherited in the next F1 generation. In conclusion, we successfully established genetically engineered pigs with the desired point mutation via electroporation-mediated introduction of the CRISPR/Cas9 system into zygotes, thereby avoiding the time-consuming and complicated micromanipulation method.
Introduction
Diabetes mellitus is a major contributor to public health issues, and the number of patients with diabetes is increasing globally (Shaw et al., 2010). Type 1 diabetes is caused by the immunological destruction of the insulin-producing islet β cells. Thus, pancreatic islet transplantation, which eliminates the need for insulin injections, is considered an effective treatment strategy (Atkinson and Eisenbarth, 2001). However, islet transplantation is limited due to the lack of islet donors (Frank et al., 2005).
The anatomical and physiological properties of pigs are similar to those of humans. Therefore, genetically modified pigs are expected to be ideal organ donors for xenotransplantation. Islets collected from pigs are an attractive alternative source for islet xenotransplantation. Thus, studies have been conducted on the microencapsulation of islets (Basta et al., 2011; Hillberg et al., 2013) and genetic modifications (Kemter et al., 2018) to protect porcine islets from the host immune system. Porcine insulin differs from human insulin by one amino acid at the carboxy-terminus of the B chain (alanine in pigs and threonine in humans) (Sonnenberg and Berger, 1983). The conversion of a single nucleotide at the given position of the porcine insulin (INS) gene (i.e., the introduction of a point mutation), which changes G to A at codon 54 (GCC, encoding alanine, to ACC, encoding threonine), converts porcine insulin to human insulin and enables the generation of genetically engineered pigs secreting human insulin, thereby establishing a suitable donor for the xenotransplantation of islets.
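The codon-level logic of this humanizing conversion can be sketched in a few lines. This is an illustrative toy, not part of the study's pipeline; the codon table below covers only the two codons involved.

```python
# Minimal sketch of the single-nucleotide conversion described above:
# changing G -> A at the first position of codon 54 turns GCC (alanine)
# into ACC (threonine), humanizing the insulin B chain.
CODON_TABLE = {"GCC": "Ala", "ACC": "Thr"}  # only the two codons involved

def point_mutate(codon: str, position: int, new_base: str) -> str:
    """Return the codon with `new_base` substituted at `position` (0-based)."""
    bases = list(codon)
    bases[position] = new_base
    return "".join(bases)

porcine_codon = "GCC"                         # codon 54 of porcine INS (alanine)
human_codon = point_mutate(porcine_codon, 0, "A")

print(porcine_codon, CODON_TABLE[porcine_codon])  # GCC Ala
print(human_codon, CODON_TABLE[human_codon])      # ACC Thr
```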
Gene editors, such as clustered regularly interspaced short palindromic repeats-associated protein 9 (CRISPR-Cas9) nucleases (Cong et al., 2013; Mali et al., 2013), have enabled the insertion of precise base mutations into the genomic DNA of cells, including zygotes and embryonic cells (Inui et al., 2014; Yin et al., 2014; Yamamoto and Gerbi, 2018). After gene editors induce double-strand breaks (DSBs) in DNA, the non-homologous end-joining (NHEJ) or homology-directed repair (HDR) pathway acts to repair the DNA (Kanaar et al., 1998). The NHEJ repair pathway can induce random insertions or deletions (indels) and disrupt the functions of targeted genes. By contrast, the precise repair of DNA after DSBs is achieved via the HDR pathway using donor DNA that has a region homologous with sister chromatids, homologous chromosomes, or exogenous DNA. CRISPR/Cas9-mediated HDR can be used to introduce a point (or small) mutation via the simultaneous introduction of a single-stranded oligodeoxynucleotide (ssODN), which has right and left arms homologous with the target region, as the donor DNA encoding the desired point mutation (Inui et al., 2014).
In pigs, point mutations have been introduced either via the microinjection of gene editors with ssODN into zygotes/embryos (Zhou et al., 2016) or via a somatic cell nuclear transfer (SCNT) technique using gene-edited somatic cells (Montag et al., 2018; Li et al., 2020). Previously, we established the GEEP (gene editing by electroporation of Cas9 protein) method, which can successfully introduce the CRISPR/Cas9 system into porcine zygotes via electroporation, resulting in highly efficient disruption of the targeted gene (Tanihara et al., 2016). GEEP requires considerably less time and no advanced micromanipulation skills. In mice, electroporation is widely used to transfer exogenous molecules for genome engineering and to generate knock-in (Miyasaka et al., 2018) and point mutations (Teixeira et al., 2018). In pigs, we previously succeeded in introducing a precise point mutation into porcine zygotes via electroporation (Wittayarat et al., 2021). However, there have been no reports concerning the generation of point-mutated pigs via electroporation.
Since most DSBs generated by Cas9 are subjected to the NHEJ pathway, the efficiency of CRISPR/Cas9-mediated HDR is inherently low (Frit et al., 2014; Liu et al., 2018). In SCNT, the low efficiency of HDR is manageable by selecting somatic cells carrying the desired mutations. However, improvements in HDR efficiency are crucial for the one-step introduction of HDR-mediated mutations into zygotes via microinjection and electroporation. DNA ligase IV is a key enzyme of the NHEJ pathway, and 5,6-bis(benzylideneamino)-2-mercaptopyrimidin-4-ol (Scr7) is a DNA ligase IV inhibitor. Therefore, supplementation with Scr7 improves HDR efficiency by inhibiting the NHEJ pathway (Srivastava et al., 2012; Maruyama et al., 2015). Furthermore, HDR efficiency is affected by the homology arm length of the ssODN donor in porcine somatic cells and zygotes (Wittayarat et al., 2021). It is also crucial to optimize the ssODN length for efficient HDR before embryo transfer. In the present study, we optimized the HDR-mediated introduction of a precise point mutation into the INS target region of porcine zygotes using the GEEP method and generated pigs with INS point mutations.
2 Materials and methods
Animals
All animal care and experimental procedures were performed according to the guidelines for animal experiments of Tokushima University and the ARRIVE guidelines. Animal husbandry and anesthesia/euthanasia procedures were carried out as described previously (Tanihara et al., 2020a). The Prefectural Livestock Research Institute (Tokushima, Japan) provided two sexually mature Landrace gilts. Humane endpoints were defined as refusal of food or drink, symptoms of suffering, or decreased body weight resulting from the INS modification. In the present study, euthanasia was performed on a few INS-mutant piglets showing signs of poor general health conditions and extremely high blood glucose levels.
Oocyte collection, in-vitro maturation (IVM), and in-vitro fertilization (IVF)
Oocyte collection, IVM, and IVF were performed as described previously (Nguyen et al., 2017). Pig ovaries were collected at a local slaughterhouse from prepubertal gilts. We collected cumulus-oocyte complexes and cultured them in a maturation medium for 44 h. The matured oocytes were co-incubated with frozen-thawed ejaculated spermatozoa (1 × 10⁶ cells/ml) for 5 h in a porcine fertilization medium (Research Institute for the Functional Peptides Co., Yamagata, Japan), and subsequently cultured in porcine zygote medium (PZM-5; Research Institute for the Functional Peptides Co.) for 7 h prior to gene editing using electroporation. The oocytes were incubated in a humidified incubator at 39°C and 5% CO₂.
Design of gRNA sequence
Alt-R CRISPR crRNAs and the tracrRNA system, supplied by Integrated DNA Technologies (IDT; Coralville, IA, United States), were used for the guide RNA (gRNA). The CRISPRdirect web tool (https://crispr.dbcls.jp/) was used to design the gRNA sequence (Naito et al., 2015). To minimize off-target effects, the COSMID web tool (https://crispr.bme.gatech.edu/) (Cradick et al., 2014) was used to confirm that the 14 nucleotides at the 3′ end of the designed gRNAs matched the target regions of the INS gene.
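The specificity check described here, matching the PAM-proximal 3′ seed of the gRNA against the intended target region, can be sketched as follows. The gRNA sequence is the one reported in Section 2.9; the flanking genomic context here is hypothetical filler, not the real INS locus.

```python
# Sketch of the gRNA specificity check: confirm that the 14 nt at the
# 3' end of a designed gRNA occur in the intended target region.
GRNA = "TCTACACGCCCAAGGCCCGT"  # 20-nt protospacer targeting porcine INS

def seed_matches_target(grna: str, target_region: str, seed_len: int = 14) -> bool:
    """True if the PAM-proximal 3' seed of the gRNA occurs in the target region."""
    seed = grna[-seed_len:]
    return seed in target_region

# Hypothetical target region: the protospacer embedded in filler sequence
# ("TGG" downstream would be a valid NGG PAM).
target_region = "AAAA" + GRNA + "TGGCCC"
print(seed_matches_target(GRNA, target_region))       # True
print(seed_matches_target(GRNA, "A" * 40))            # False
```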
Electroporation and in-vitro culture
Electroporation with ssODN was performed as described previously (Wittayarat et al., 2021). The fertilized zygotes were placed in a line in the electrode gap of a chamber slide (LF501PT1-20; BEX, Tokyo, Japan) filled with Nuclease-Free Duplex Buffer (IDT) containing 100 ng/μl gRNA targeting the porcine INS gene, 100 ng/μl Cas9 protein (Guide-it Recombinant Cas9; Takara Bio, Inc., Shiga, Japan), and 16 pmol/μl ssODN. Thereafter, zygotes were electroporated (five 1-ms pulses at 25 V) using a CUY21EDIT II electroporator (BEX). After electroporation, to examine the genotypes of the resulting blastocysts along with the zygotes' competence to develop to the blastocyst stage, zygotes were cultured either for 12 h in PZM-5 until embryo transfer or for 3 days in PZM-5, followed by 4 days in porcine blastocyst medium (Research Institute for the Functional Peptides Co.). Zygotes and embryos were incubated in a humidified incubator at 39°C with 5% CO₂, 5% O₂, and 90% N₂.
Analysis of targeted gene sequence after electroporation
Genomic DNA was isolated from individually collected blastocysts, and the genomic regions flanking the gRNA target site were PCR-amplified using KOD One PCR master mix (Toyobo, Osaka, Japan) and the specific primers 5′-AGGACGTGGGCTCCTCTCTC-3′ (forward) and 5′-GGGCCTTGACTCCGTAAGAT-3′ (reverse). The PCR products were extracted using agarose gel electrophoresis, and the genotype of each blastocyst was analyzed using Sanger sequencing followed by application of the TIDE (tracking of indels by decomposition) bioinformatics package (Brinkman et al., 2014).
Embryo transfer
The preparation of recipient gilts for embryo transfer was performed as described previously (Onishi et al., 2000). Four to 7 weeks after mating, pregnant gilts were administered 0.2 mg of cloprostenol (Planate; MSD Animal Health, Tokyo, Japan) via intramuscular (i.m.) injection. After 24 h, the gilts were administered a second i.m. injection of 0.2 mg cloprostenol and 1000 IU eCG (PMSA for Animal, ZENOAQ, Fukushima, Japan). Then, 72 h after the eCG injection, recipient gilts were administered an i.m. injection of 1500 IU hCG (Gestron 1500, Kyoritsu Seiyaku) to induce estrus. Subsequently, 100 embryos electroporated 12 h before embryo transfer were transferred into each oviduct of a recipient gilt 72 h after the hCG i.m. injection, resulting in the transfer of 200 embryos per gilt.
Mutation analysis in blastocysts and piglets using deep sequencing
Individually collected blastocysts and tissue samples were used to isolate genomic DNA. Following the manufacturer's instructions, two-step PCR using specific primers and index PCR primers was performed to amplify the genomic regions flanking the gRNA target site (Illumina, Hayward, CA, United States; Supplementary Table S1). After gel purification, the amplicons were subjected to MiSeq sequencing using the MiSeq Reagent Kit v2 (250 cycles; Illumina). CRISPResso2 (Clement et al., 2019) was used for data analysis. To minimize false-positive classification, indels were assessed within a five base pair (bp) window surrounding the predicted cleavage site (Gaj et al., 2017). Genomic DNA isolated from wild-type (WT) pig-derived ear biopsies was used as the control. Sequencing errors were defined as a small number of amplicons carrying different sequences that were also observed in the control sample.
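The windowing rule used to reduce false positives can be sketched as below: an amplicon counts as mutated only if an indel falls within the 5-bp window around the predicted Cas9 cleavage site. The coordinates and reads here are hypothetical, and this is a simplified stand-in for CRISPResso2's quantification window, not its implementation.

```python
# Sketch of the indel-calling window: keep only indels within a 5-bp
# window (i.e., within +/-2 bp) of the predicted cleavage site.
def indel_in_window(indel_positions, cleavage_site, window=5):
    """True if any indel lies within the `window`-bp window around the site."""
    half = window // 2
    return any(abs(pos - cleavage_site) <= half for pos in indel_positions)

cleavage_site = 100                                # hypothetical coordinate
print(indel_in_window([98, 150], cleavage_site))   # True: 98 is within +/-2 bp
print(indel_in_window([120], cleavage_site))       # False: far from the site
```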
Off-target analysis using deep sequencing
An off-target analysis was performed as described previously (Tanihara et al., 2016). Genomic DNA of delivered pigs derived from electroporated zygotes and of two control WT pigs were individually used as templates for PCR. Potential off-target sites were chosen using the COSMID webtool (Cradick et al., 2014), which ranks off-target sites based on the number and position of mismatches. The six top-ranked potential off-target sites were analyzed using deep sequencing with specific primers and index PCR primers (Supplementary Table S2), as described in Section 2.7.
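The idea behind ranking candidate off-target sites by the number and position of mismatches can be sketched as follows. The scoring weights and candidate sequences below are illustrative assumptions, not the actual COSMID algorithm.

```python
# Sketch of mismatch-based off-target ranking: score each candidate site
# by its mismatches to the gRNA, weighting PAM-proximal (3') mismatches
# more heavily; lower scores indicate riskier (more similar) sites.
GRNA = "TCTACACGCCCAAGGCCCGT"

def offtarget_score(grna: str, site: str) -> float:
    """Sum of mismatch penalties; 3' mismatches weigh more than 5' ones."""
    score = 0.0
    for i, (g, s) in enumerate(zip(grna, site)):
        if g != s:
            score += 1.0 + i / len(grna)
    return score

candidates = [
    "TCTACACGCCCAAGGCCCGA",   # one 3' (PAM-proximal) mismatch
    "ACTACACGCCCAAGGCCCGT",   # one 5' (PAM-distal) mismatch
    "TCTACACGCCCAAGGCCCGT",   # perfect match (the on-target site)
]
ranked = sorted(candidates, key=lambda s: offtarget_score(GRNA, s))
print(ranked[0])   # the perfect match ranks first (lowest score, highest risk)
```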
2.9 Experimental design

2.9.1 Experiment 1: Effect of Scr7 concentration on point mutation introduction efficiency

First, we designed gRNAs targeting porcine INS (target sequence: 5′-TCTACACGCCCAAGGCCCGT-3′) and an ssODN carrying right and left 40-bp homology arms as the homology donor to optimize the efficiency of the HDR-mediated introduction of a precise point mutation. A mixture of gRNA targeting the porcine INS gene, Cas9 protein, ssODN, and Scr7 (Xcessbio Biosciences, Inc., San Diego, CA, United States) was used to induce a single amino acid conversion via electroporation in porcine zygotes. As a target, we attempted to change GCC, encoding alanine, at codon 54 to ACC, encoding threonine (Figure 1A). To evaluate the efficiency at which Scr7 promoted the introduction of a point mutation, we electroporated gRNA, Cas9 protein, and ssODNs with multiple concentrations of Scr7 (0.5, 1, 2, and 4 μM) or without Scr7 (0 μM) into porcine putative zygotes, 12 h after the start of IVF. After in-vitro culture for 7 days, the resulting blastocysts were subjected to genotype analysis, as described above. Blastocysts that carried only the WT sequence were classified as WT. Blastocysts carrying the desired point mutation without other mutations were classified as those with a point mutation. Blastocysts that carried more than one type of mutation (insertion and/or deletion) near the gRNA-targeting site were classified as indels (Figure 1B).
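The genotype classification criteria above can be sketched as a small decision function. The allele labels ("WT", "point", "indel") are illustrative summaries of the sequencing calls, not part of the study's software.

```python
# Sketch of the blastocyst genotype classification used in Experiment 1:
# only-WT alleles -> WT; the desired point mutation without any other
# mutation -> point mutation; any indel allele -> indels.
def classify_blastocyst(alleles: set) -> str:
    if alleles == {"WT"}:
        return "WT"
    if "indel" in alleles:
        return "indels"
    if "point" in alleles:
        return "point mutation"
    return "unclassified"

print(classify_blastocyst({"WT"}))               # WT
print(classify_blastocyst({"point"}))            # point mutation
print(classify_blastocyst({"point", "indel"}))   # indels
```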
Experiment 2: Effect of homology arm length on point mutation introduction efficiency
The length of ssODNs as homology donors affects HDR efficiency (Wittayarat et al., 2021). We designed six ssODNs with homology arms of different lengths to optimize the efficiency of HDR-mediated point mutations (Figure 1C). We introduced Cas9 protein with gRNA and 16 pmol/μl of each ssODN into in-vitro-fertilized zygotes via electroporation. After in-vitro culture for 7 days, we evaluated the frequency of detected mutations in the INS target region of the blastocysts. Blastocyst genotypes were classified according to the criteria for embryo genotypes described in Section 2.9.1.
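The donor designs being compared can be sketched as below: each ssODN carries the single-base substitution flanked by homology arms of a chosen length. The genomic sequence and mutation position here are hypothetical filler, not the real INS locus.

```python
# Sketch of the ssODN donor designs: homology arms of a chosen length
# flank the single-base substitution (G -> A at codon 54).
import random

random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(400))  # hypothetical locus
mut_pos = 200                                # hypothetical position of the change

def design_ssodn(seq: str, pos: int, new_base: str, arm: int) -> str:
    """Return an ssODN with `arm`-bp homology arms around the substitution."""
    return seq[pos - arm:pos] + new_base + seq[pos + 1:pos + 1 + arm]

for arm in (20, 30, 40, 60, 80):
    donor = design_ssodn(genome, mut_pos, "A", arm)
    print(arm, len(donor))   # total donor length = 2 * arm + 1
```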
Experiment 3: Generation and analysis of genetically modified pigs
Next, we investigated whether genetically modified pigs carrying point mutations can be generated from electroporation-mediated point-mutated zygotes. Using electroporation, we introduced the Cas9 protein, gRNA targeting the INS gene, ssODNs carrying the 40-bp homology arms, and 1 μM Scr7 into the zygotes. The zygotes were then transferred to the two recipients ~12 h after electroporation or cultured until the blastocyst stage. After delivery of the piglets, we carefully monitored their body condition and performed euthanasia based on signs of prostration. The genotypes of and off-target effects in the delivered piglets were analyzed using deep sequencing. To evaluate the gene editing outcomes accurately, the genotypes of genetically modified blastocysts were also analyzed.
Furthermore, we investigated whether the introduced point mutations were inherited by the next generation. The pig with the successful introduction of the desired point mutation was mated with a WT boar. The genotypes of the F1 piglets were analyzed using Sanger sequencing and TIDE.
Statistical analysis
All percentage data were subjected to arcsine transformation and then analyzed by analysis of variance followed by Fisher's protected least significant difference test. The percentage of mutated blastocysts was analyzed using chi-squared tests with Yates' correction. We used the StatView software (Abacus Concepts, Berkeley, CA, United States) for statistical analysis. Differences with a probability value (p) < .05 were considered statistically significant, and those with p < .1 were considered marginally significant.
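The two core computations in this analysis plan can be sketched in pure Python: the variance-stabilizing arcsine transformation applied to percentage data, and the Yates-corrected chi-squared statistic for a 2×2 table of mutated vs. non-mutated blastocysts. The counts below are illustrative, and this is a minimal stand-in for the StatView analyses, not a full ANOVA/LSD pipeline.

```python
# Sketch of (1) the arcsine transformation of proportion data and
# (2) a 2x2 chi-squared statistic with Yates' continuity correction.
import math

def arcsine_transform(proportion: float) -> float:
    """Variance-stabilizing transform for proportions: asin(sqrt(p))."""
    return math.asin(math.sqrt(proportion))

def chi2_yates(a, b, c, d):
    """Yates-corrected chi-squared statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * max(abs(a * d - b * c) - n / 2, 0) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

print(round(arcsine_transform(0.20), 3))   # transform of a 20% rate (radians)
print(round(chi2_yates(8, 32, 1, 26), 3))  # illustrative 2x2 comparison
```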
Study approval
The Institutional Animal Care and Use Committee of Tokushima University approved the animal experiments conducted in the present study (approval number: T2019-11).
3 Results

Experiment 1: Effect of Scr7 concentration on point mutation introduction efficiency
The concentration of Scr7 did not have a statistically significant effect on blastocyst formation rates from electroporated zygotes (Figure 2A). Of the zygotes electroporated with ssODNs, 5.6%-20.0% developed into blastocysts carrying a point mutation (Figure 2B). The ratio of blastocysts that carried a point mutation to the total number of examined blastocysts in the group treated with 1 μM Scr7 (20.0%) was higher (p < .1) than that in the groups treated with 2 μM (6.7%) and 4 μM (5.6%) Scr7.
Experiment 2: Effect of homology arm length on point mutation introduction efficiency
The development of electroporated zygotes into blastocysts was statistically unaffected by the length of the ssODNs (Figure 3A). The genotypes of the blastocysts, determined using TIDE, showed that blastocysts treated with gRNA and ssODNs having 20-, 30-, 40-, 60-, and 80-bp homology arms carried the desired point mutation in the INS target region (at rates of 12.5%, 37.9%, 42.5%, 3.7%, and 16.1%, respectively; Figure 3B). The proportion of blastocysts carrying point mutations was significantly higher (p < .05) in zygotes into which ssODNs with 40-bp homology arms were introduced than in those with 20-, 60-, and 80-bp homology arms. Furthermore, the highest rate of blastocysts carrying point mutations without indels was observed in zygotes with ssODNs with 40-bp homology arms (12.5%).
FIGURE 2
Rates of blastocyst formation (A) and genotypes of blastocysts (B) developed from zygotes electroporated with the CRISPR/Cas9 system targeting the INS gene and single-stranded oligodeoxynucleotide (ssODN) with various Scr7 concentrations. The percentage of point mutations and indels indicates the ratio of the number of blastocysts carrying a point mutation/indel to the total number of blastocysts examined. Five replicate trials were carried out, and numbers above the bars indicate total number of blastocysts examined. Point mutation without indels, blastocyst carrying a point mutation without another mutation; Indels, blastocyst carrying mutations around the gRNA-targeting site; WT, wild-type. Error bars indicate the mean ± SEM (A). *p < .1.
FIGURE 3
Rates of blastocyst formation (A) and genotypes of blastocysts (B) developed from zygotes electroporated with the CRISPR/Cas9 system targeting the INS gene and single-stranded oligodeoxynucleotide (ssODN) with different homology arm lengths. The percentages of point mutations and indels indicate the ratio of the number of blastocysts carrying a point mutation/indel to the total number of blastocysts examined. Four replicate trials were carried out and numbers above the bars indicate the total number of blastocysts examined. Point mutation without indels, blastocyst carrying a point mutation without another mutation; Point mutation with indels, blastocyst carrying a point mutation with another mutation; Indels, blastocyst carrying mutations around the gRNA-targeting site; WT, wild-type. Error bars indicate the mean ± SEM (A). a-c p < .05.
Frontiers in Cell and Developmental Biology frontiersin.org 05
Experiment 3: Generation and analysis of gene-modified pigs
We electroporated gRNA, Cas9 protein, and ssODNs carrying 40-bp homology arms with 1 μM Scr7 into zygotes, and the genotypes of the resulting genetically modified blastocysts were analyzed using deep sequencing to evaluate mosaicism (Figure 4). Four of the thirteen blastocysts carried mosaic mutations, including the desired point-mutated alleles. Two of the thirteen blastocysts carried the desired point mutation biallelically.
Next, we electroporated gRNA, Cas9 protein, and ssODNs carrying 40-bp homology arms with 1 μM Scr7 into zygotes and then transferred them into the oviducts of two recipient gilts. One hundred embryos were transferred into each oviduct of a recipient gilt. Both recipients became pregnant and gave birth to five piglets. Deep sequencing analysis of the target sites of the INS genomic regions in ear biopsies revealed that all piglets carried INS mutations (Figure 5). Piglets #1 and #5 had the desired point mutations and showed normal growth (deep sequencing analysis using ear samples demonstrated that #1 carried only the desired point mutation; #5 carried the desired point mutation allele and a 1-bp deletion allele). In contrast, piglets #2, #3, and #4 had biallelic frameshift mutations (#2 carried a 32-bp insertion allele; #3 carried a 7-bp deletion with 1-bp modification, 5-bp deletion, and 2-bp deletion alleles; and #4 carried only a 2-bp insertion allele), which indicated knockout of the INS gene.

FIGURE 4
Deep-sequencing analysis of the INS target regions in blastocysts obtained using the same procedure adopted to generate genetically modified pigs carrying point mutations. * The target and PAM sequences of gRNA are indicated in blue and red, respectively. Inserted and modified sequences are represented by green and pink, respectively. ** The frequency was calculated by dividing the number of amplicons by the total number of reads. *** The mutation rate was calculated by dividing the total number of mutant amplicons by the total number of reads.
FIGURE 5
Deep-sequencing analysis of the INS target regions in ear biopsies from delivered piglets. * The target and PAM sequences of gRNA are indicated in blue and red, respectively. Inserted and modified sequences are represented by green and pink, respectively. ** The frequency was calculated by dividing the number of amplicons by the total number of reads. *** The mutation rate was calculated by dividing the total number of mutant amplicons by the total number of reads.
Then, we performed off-target analysis using deep sequencing, which indicated no differences at the six potential off-target sites except OT1 (Table 1). Off-target analysis of OT1 in pigs #1, #2, and #3 detected approximately 38%-49% of the modified sequence. However, the same modification (2-bp deletion) was detected in one of the WT genomic DNA fragments, which was considered a monoallelic single nucleotide polymorphism. Piglet #3 died soon after birth. Piglets #2 and #4 suffered from poor health conditions with significant prostration (Supplementary Figure S1A) and exhibited high blood glucose levels 1 day after birth (Supplementary Figure S1B). In accordance with the humane endpoint, early euthanasia was performed on piglets #2 and #4 1 day after birth. Macroscopic necropsy analysis indicated that no abnormalities were detected in major organs, including the pancreas (Supplementary Figure S1C). Next, we performed an inheritance analysis of the INS-point-mutant pigs. Pig #1 had a leg injury due to an accident; therefore, pig #5 was mated with a WT boar. Eleven F1 piglets were delivered, and eight piglets carried the desired point mutations in INS. Three of the eleven F1 piglets carried a monoallelic mutation (1-bp deletion) in INS, which was the same mutation seen in pig #5.
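The frequency and mutation-rate definitions given in the figure legends (allele frequency = amplicon count / total reads; mutation rate = total mutant amplicons / total reads) reduce to simple arithmetic. The read counts below are illustrative, not the study's data.

```python
# Sketch of the frequency and mutation-rate calculations from the figure
# legends, applied to hypothetical amplicon counts for one piglet.
reads = {
    "WT": 120,
    "point_mutation": 4300,
    "1bp_deletion": 4580,
}
total = sum(reads.values())

# Frequency of each allele: its read count divided by the total reads.
frequency = {allele: count / total for allele, count in reads.items()}
# Mutation rate: all non-WT reads pooled, divided by the total reads.
mutation_rate = sum(c for a, c in reads.items() if a != "WT") / total

print({a: round(f, 3) for a, f in frequency.items()})
print(round(mutation_rate, 3))
```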
Finally, we performed deep sequencing analysis of major organs derived from the resulting pigs to evaluate mosaicism accurately (Figure 6). The genotypes (mosaicism) of the major organs in pigs #2 to #5 were similar to those of the ear samples. However, the genotype of the pancreas sample from pig #1 showed a mosaic mutation, although the other organs, including the ear, revealed the biallelic point mutation.
Discussion
Recently, the generation of point mutations has been widely used to evaluate the functions of enzymes, transcription factors, and signaling molecules. Additionally, the introduction of a desired point mutation into the genomic DNA of an animal model is a novel strategy to mimic intractable human diseases caused by point mutations, such as cancer (Schook et al., 2015), amyotrophic lateral sclerosis (De Giorgio et al., 2019), and Parkinson's disease (Blandini and Armentero, 2012). Therefore, the introduction of point mutations in pigs has the potential to contribute to human medicine. Previous studies using gene editors demonstrated the generation of point-mutated pigs by SCNT using somatic cells carrying the desired point mutations (Montag et al., 2018; Li et al., 2020) or the microinjection of gene editors and DNA donors into zygotes (Zhou et al., 2016). In the present study, electroporation successfully enabled the HDR-mediated introduction of a point mutation directly into porcine in-vitro-fertilized zygotes and the generation of genetically modified pigs carrying the intended point mutation.
Scr7 temporarily blocks NHEJ and enhances the frequency of HDR, resulting in improvements in the insertion efficiency of DNA fragments at target loci cleaved by the CRISPR/Cas9 system (Maruyama et al., 2015). In the present study, we simultaneously introduced Scr7 with CRISPR/Cas9 and ssODN into porcine zygotes to improve the efficiency of point mutation introduction. Electroporation of the CRISPR/Cas9 system with 1 μM Scr7 tended to improve the efficiency of point mutation introduction in porcine zygotes, but no statistically significant effect relative to the untreated control was observed. We previously evaluated the effect of Scr7 on increasing HDR efficiency targeting another gene, KRAS, and observed no improvement in HDR efficiency upon the addition of Scr7 during electroporation. The results of the present and previous studies indicate that the benefit of adding Scr7 during electroporation is limited. Previous studies have reported that 1 μM Scr7 is an effective concentration for increasing HDR (Maruyama et al., 2015; Ma et al., 2016), whereas other studies demonstrated no significant increase in HDR efficiency with Scr7 treatment (Lee et al., 2016; Song et al., 2016; Park et al., 2017). In the studies that demonstrated Scr7 efficacy, Scr7 was directly injected into zygotes using a glass capillary or introduced into cell lines by culturing for 24 h in Scr7-supplemented medium. Whether culturing embryos with Scr7 after electroporation improves HDR needs to be evaluated. However, the average percentage of blastocysts from zygotes electroporated with Scr7 was lower than that from zygotes electroporated without Scr7. Another study indicated that higher concentrations of Scr7 reduce cell growth by inhibiting the cell cycle (Maruyama et al., 2015). The cell cycle of zygotes and embryos may also be affected by Scr7, indicating that its toxicity should also be evaluated.
*Substitutions, deletions, and insertions at each off-target (OT) site were assessed within a 5-bp window around the predicted Cas9 cleavage site using CRISPResso2. The percentages of sequences were calculated by dividing the read number of sequences with substitutions, deletions, or insertions by the total read number of sequences. Ear tissues from two wild-type pigs (controls 1 and 2) or gene-edited pigs (pigs #1 to #5) were used as samples.
Other NHEJ inhibitors and HDR enhancers should be evaluated in porcine zygotes to improve the efficiency of introducing point mutations. RS-1 is an HDR enhancer that increased Cas9- and TALEN-mediated knock-in efficiency in rabbit embryos (Song et al., 2016). Furthermore, L755507, a β3-adrenergic receptor agonist, enhanced CRISPR/Cas9-mediated HDR efficiency in human induced pluripotent stem cells (Yu et al., 2015). Using multiple gRNAs, which overlap by at least five base pairs in target sites, enhanced CRISPR/Cas9-mediated knock-in efficiency in mice (Jang et al., 2018).

FIGURE 6
Deep-sequencing analysis of the INS target regions in major organs derived from delivered pigs. * The target and PAM sequences of gRNA are indicated in blue and red, respectively. Inserted and modified sequences are represented by green and pink, respectively. ** The frequency was calculated by dividing the number of amplicons by the total number of reads.

In the present study, Scr7 had no apparent effects on
point mutation introduction, but the combination of these strategies with or without Scr7 could improve HDR-mediated gene modification in porcine zygotes.
The rational design of ssODN donors is a key parameter in promoting HDR pathway activity. Although ssODNs with longer homology arms can be used to maintain sufficient homology through the prevention of exonuclease degradation, longer ssODNs increase the risk of forming secondary structures, thereby decreasing the number of donors available for HDR (Howden et al., 2015). Our results demonstrated that the elongation of homology arms decreased HDR-mediated gene modification, which is consistent with the results of previous studies in pigs (Wittayarat et al., 2021). An adequate homology arm length for the ssODN is therefore an essential design parameter.
Moreover, we did not evaluate the optimal concentration of ssODNs. Using the microinjection method, higher concentrations of ssODNs were shown to reduce HDR-mediated gene modification in porcine zygotes (Zhou et al., 2016). In that study, the authors considered that higher amounts of ssODNs probably stimulated the NHEJ pathway; therefore, the DSBs introduced by the CRISPR/Cas9 system were repaired preferentially by the NHEJ process instead of HDR. Furthermore, the design of ssODNs has the potential to improve the HDR-mediated introduction of point mutations. In the CRISPR/Cas9 system, mismatches between the gRNA sequence and DNA target sites around the PAM-proximal region significantly reduce gene editing efficiency (Hsu et al., 2013). Designing ssODNs that carry silent mutations around the 3′ end of the gRNA sequence and the PAM sequence prevents re-cutting of the target region by CRISPR/Cas9 after HDR events, resulting in an improved introduction of point mutations (Zhou et al., 2016). In the present study, we did not introduce silent mutations into the ssODN. Therefore, re-cutting of the target site may have reduced the efficiency of successfully introducing point mutations. Further optimization of ssODNs should be considered to improve HDR efficiency in porcine zygotes and embryos.
The CRISPR/Cas-mediated base editor system, another approach for introducing a point mutation at a precise position independent of HDR, generates mutations at the single-base level (Komor et al., 2016; Gaudelli et al., 2017). Although the currently available base editors have a limited editing window, electroporation-mediated base editing has also been demonstrated in mouse zygotes. Electroporation of base editors into porcine zygotes will be an effective strategy for introducing a single point mutation.
In the present study, we generated five genetically modified pigs carrying various INS mutations, including the desired point mutations. Although several factors, such as the target gene, the viability of embryos, the quality of frozen-thawed sperm, and the condition of recipient gilts, affect the resulting litter size, the reason for the low litter size in the present study is unknown. The results of the deep sequencing analysis using ear biopsies and major organs indicated that piglets #2, #3, and #4 carried only frameshift mutations as a result of the failure to introduce point mutations. They also exhibited a lethal phenotype with elevated blood glucose levels due to knockout of the INS gene. These piglets had macroscopically normal pancreases. Previous histological studies have shown that the pancreas formed in INS-deficient pigs completely lacks insulin, resulting in a lethal phenotype (Cho et al., 2018). However, piglet #5, which carried the desired point mutation allele together with a frameshift (−1 bp) allele, did not show this lethal phenotype, which suggests the secretion of functional humanized insulin.
The results of the deep sequencing analysis using ear biopsies indicated mosaicism in piglets #2 and #3. Deep sequencing analysis of the major organs also demonstrated similar mosaic genotypes. However, in pig #1, the genotype of the pancreas showed mosaicism, including WT alleles, which was not detected in the ear samples. In pigs, mosaicism in the germ line is particularly problematic because maintaining the resulting pigs until sexual maturity is time- and labor-consuming, and costly. In the present study, mosaicism was also often observed in genetically modified blastocysts (Figure 4). Strategies for reducing mosaicism are therefore critical. A high frequency of mosaicism is a major concern for electroporation-mediated gene editing in porcine zygotes (Tanihara et al., 2016; Tanihara et al., 2018; Tanihara et al., 2020b). We have shown that the careful selection of efficient gRNAs is an effective way to reduce mosaicism in genetically modified pigs (Tanihara et al., 2020a; Tanihara et al., 2021). As described above, improving HDR efficiency is also effective in reducing mosaicism by avoiding the failure to introduce point mutations. Furthermore, elevating the Cas9 concentration improves the efficiency of gene editing in mutant blastocysts (Le et al., 2020; Tanihara et al., 2021). These optimizations of the zygotic gene editing system may reduce undesired mosaicism.
However, improving the gene-cutting efficiency of CRISPR/Cas9 may also affect off-target events. Off-target effects, such as unexpected DNA cleavage caused by the binding of gene editors to unintended genomic sites, are a major concern in gene editing. In the present study, we designed gRNAs using the COSMID web tool to minimize off-target effects. The modified sequence identified in pigs #1, #2, and #3 was considered a monoallelic single nucleotide polymorphism, because the same modification was detected in one of the WT genomic DNA fragments. Therefore, no off-target events were observed in the F0 pigs. In our previous study, increasing the concentration of CRISPR/Cas9 components was effective in increasing gene editing efficiency without off-target events (Le et al., 2020). However, further investigation is crucial, especially for clinical applications in humans that require precise gene modification. To minimize off-target effects and improve practical gene editing, the latest approaches have been developed, including off-target detection by algorithmically designed software and genome-wide assays, cytosine or adenine base editors, prime editing, Cas9 variants including dCas9 and Cas9 paired nickases, and the chemical modification of gRNA (Naeem et al., 2020). A high-fidelity Cas9 mutant also resulted in high on-target activity while reducing off-target effects in human cells (Vakulskas et al., 2018). However, the potential for off-target events cannot be completely eliminated. We should minimize off-target events by utilizing the latest strategies in founder generations and evaluate possible off-target events in non-mosaic genetically modified lines prior to clinical application in humans.
In conclusion, we have developed a new approach for generating genetically modified pigs with desired point mutations by electroporating the CRISPR/Cas9 system into zygotes, thereby avoiding the time-consuming and complicated micromanipulation method. This point mutation was successfully inherited by the F1 generation. Thus, we successfully established an islet donor strain for pig-to-human xenotransplantation through the electroporation-mediated introduction of point mutations into zygotes. However, the efficiency of introducing point mutations is still low. Therefore, efficient practical application to improve HDR-mediated gene modification in porcine zygotes and embryos requires further optimization.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author.
Ethics statement
The animal study was reviewed and approved by the Institutional Animal Care and Use Committee of Tokushima University (approval number: T2019-11).
"year": 2023,
"sha1": "99e355d438eb1160ede3ee6fcf6eb5e2f8cb030f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "99e355d438eb1160ede3ee6fcf6eb5e2f8cb030f",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Anatomy of the spinal cord of Alouatta belzebul
The genus Alouatta comprises species popularly known as howlers; the red-handed howler, Alouatta belzebul, presents a wide geographic distribution and is found in several biomes. The objective was to describe the anatomy of the spinal cord of Alouatta belzebul specimens, focusing on the topography of the medullary cone, the cervical and lumbar intumescences and the cauda equina, to provide anatomical data and compare it with other species to assist in anesthetic and surgical procedures. Four animals were received post mortem for scientific research from the fauna rescue program of the Hydroelectric Plant of Belo Monte, Pará, and were fixed in 10% formaldehyde solution. Structures such as the medullary cone, the cervical and lumbar intumescences and the cauda equina were photographed (Sony α200, 10.2 Mpx). After thawing, we measured the specimens, which ranged from 80 to 82 cm from head to toe. After the skin and musculature were removed, it was observed that the spine of all specimens presented 7 cervical, 13 thoracic, 5 lumbar and 3 fused sacral vertebrae. The spinal cord was exposed after removal of the vertebral arches; it was 22 cm long in all animals, presenting the cervical intumescence between the C3 and C6 vertebrae, with an average of 2.2 cm, and the lumbar intumescence between the T11 and T12 vertebrae, with an average of 1.65 cm. The medullary cone is located between the T12 and L1 vertebrae, with an average of 1.5 cm, and the cauda equina between L1 and S3, with an average of 15 cm. This study serves as an important basis for epidural anesthesia in the species. Article history: Received 03 October 2017; Received in revised form 23 March 2018; Accepted 28 March 2018.
INTRODUCTION
Advances in comparative animal anatomy are of fundamental importance given the scarcity of information available in the literature. Anatomical descriptions subsidize comparative and evolutionary studies, and through these descriptions one can succeed in anesthetic procedures essential to diagnostic and surgical processes (SLULLITELL, 2008).
Regional anesthetic techniques are used with a suitable safety margin, aiming to anesthetize the spinal nerves of the lumbar and sacral regions; thus, the location of anesthetic application in the epidural space varies according to the species and the ending site of the spinal cord. The use of sites caudal to the medullary cone makes the application technique safer, consequently avoiding spinal cord injuries and helping professionals who need to perform surgical procedures (GREGORES et al., 2010; SOUZA et al., 2014).
Primates of the genus Alouatta are medium-sized mammals, yet they are considered robust animals and are well known for long-range vocalization. Their jaw accommodates a rather large hyoid bone, mainly in males, which forms an oval resonance chamber responsible for the characteristic vocalizations related to group location and territory defense (HIRSCH et al., 1991). Research on these primates involves the diet, vocalization, lifestyle and ecology of the species (AGUIAR et al., 2003; BICCA-MARQUES et al., 2009; BICCA-MARQUES, 2003; CAMARGO, 2005; GRECORIN, 2006; MARTINS, 2002; NEVILLE et al., 1988). However, information regarding the comparative and/or evolutionary anatomy of these animals is scarce, as are clinical correlations with the described structures. The red-handed howler Alouatta belzebul is endemic to Brazil, occurring in the states of Amazonas, Pará, Maranhão, Piauí, Ceará, Rio Grande do Norte, Paraíba, Pernambuco and Alagoas (BONVICINO, 1989; RUSSELL et al., 2013). It is a neotropical primate of the family Atelidae. Due to their fundamentally folivorous feeding habit (NEVILLE et al., 1988), howlers present slow and discrete behavior and spend more than 70% of their time resting (SOUZA, 2005).
Habitat destruction and fragmentation have placed all Alouatta species and subspecies in the endangered species category (BICCA-MARQUES, 2003; CHIARELLO et al., 2008; RUSSELL et al., 2013). Due to this threat, howler management tools such as translocation and reintroduction are being increasingly observed and successfully applied in Alouatta conservation programs (JERUSALINSKY et al., 2010; SOUZA, 2005). These management techniques require ecological, behavioral and morphological data on the target species (STERLING; BYNUM; BLAIR, 2013).
The collection and analysis of cerebrospinal fluid (CSF) present a safe, viable and effective means of accessing and evaluating the nervous system for the diagnosis and prognosis of its diseases, such as the encephalopathies and myelopathies that affect Alouatta (GAMA et al., 2005; TRANQUILIM et al., 2013). According to Bailey and Vernau (1997), this test can detect diseases of the central nervous system with reasonable sensitivity and low specificity, thus providing a general index of the health of the system.
The use of anesthesia in wildlife management is a common practice in veterinary medicine, in which safe and effective drugs are used to contain the animal during procedures (WOLFE-COOTE, 2005). Epidural anesthesia is a frequently used technique because of its good safety margin and ease of use, making it an effective and practical alternative when animals present risk factors for inhaled or intravenous anesthetics. This technique is used to anesthetize the spinal nerves of the lumbar and sacral regions, and data about the application procedure and the anatomy of the manipulated species are necessary (CARVALHO, 2004; GREGORES et al., 2010).
The place of application of the anesthetic in the epidural space varies according to the species, being related to the ending site of the spinal cord. However, the use of sites caudal to the medullary cone makes the application safer, thus avoiding spinal cord injuries (GREGORES et al., 2010). The objective of the present study was to describe the anatomy of the spinal cord of A. belzebul, emphasizing the location of the vertebrae and intumescences as well as the morphology of the medullary cone, in order to contribute anatomical bases for the practice of epidural anesthesia in this species.
MATERIALS AND METHODS
In the present study, four male A. belzebul specimens were collected from the Wild Animals Triage Center linked to the fauna rescue program of the Hydroelectric Plant of Belo Monte under license nº 473, frozen and subsequently donated to the Federal University of Goiás -Regional Jataí.
Prior to fixation, the specimens were thawed for partial dissection, starting with an incision at the dorsal midline to remove the skin, from the cranial region to the base of the tail, separating it from the tissue underneath. After the skin was removed, the animals were fixed in 10% formaldehyde solution and kept in tanks with the same solution for preservation. Once fixed, the dorsal musculature and the vertebral arches were removed to expose the spinal cord. Then the cervical and lumbar intumescences, the medullary cone and the cauda equina were exposed, measured with a pachymeter and photo-documented with a Sony α200 10.2-Mpx digital camera. The project was approved by the Ethics Committee in Animal Experimentation (CEUA) under protocol nº 083/17. The data described for the cervical intumescence, lumbar intumescence, medullary cone (base and apex) and cauda equina, in their length and location, were compared with the available literature on other wild primates and mammals, according to the NAV (2017).
RESULTS AND DISCUSSION
After the dissection of the dorsal region, from the cranial region to the base of the tail, the spine of A. belzebul was evidenced and showed, in all specimens: 7 cervical vertebrae, 13 thoracic vertebrae, 5 lumbar vertebrae and 3 fused sacral vertebrae (Figure 1). A similar result was found by Silva et al. (2013). The cervical intumescence (IC) of A. belzebul is located between the C3 and C6 vertebrae, measuring on average 2.2 cm in length (Figure 2B and Table 1). In A. belzebul, the lumbar intumescence (IL) is located between the T11 and T12 vertebrae, with an average size of 1.65 cm in length (Figure 3). Compared with the tayra, the crab-eating raccoon and other primate species, the IL of A. belzebul is located more cranially. However, among the primates studied, a certain similarity is observed regarding the position and length of the lumbar intumescence (Table 2). The medullary cone (CM) of A. belzebul is located between the T12 and L1 vertebrae, measuring 1.5 cm in length (Figures 3A and 3B). The cauda equina is located between the L1 and S3 vertebrae, presenting 15 cm in length (Figures 2A and 3C). The morphological differences in both the measurement and the topography of the IC, IL, CM and CE among the species described in the literature show the importance of knowing the location of these structures when applying anesthetics in surgical procedures, which is specific for each species. Silva et al. (2013) state that epidural anesthetics should be applied in the lumbosacral region in Callithrix jacchus, corroborating the findings of La Salles et al. (2017) for this species; in Saguinus midas, they should be applied in the spaces between the vertebral arches of the lumbosacral region (MARTINS et al., 2013); in Sapajus libidinosus, in the epidural space of the lumbosacral region (CORDEIRO et al., 2014); and in Saimiri sciureus, in the girdle and pelvic limbs (LIMA et al., 2011a). For A. belzebul, epidural anesthesia should be applied in the lumbosacral region, specifically between the L2 and S3 vertebrae.
CONCLUSIONS
To perform epidural anesthesia, it is necessary to take into account information about the topography and measurements of the IC, IL and CM and possible anatomical variations of the spinal cord. It was concluded that the cervical intumescence (IC) of A. belzebul is located between the C3 and C6 vertebrae, the lumbar intumescence between the T11 and T12 vertebrae, and the cauda equina between the L1 and S3 vertebrae. The anatomy of the medullary cone of A. belzebul shows that its base is located at the T12 vertebra and its apex at the L1 vertebra, suggesting that epidural anesthesia in this species should be performed in the lumbosacral region between the L2 and S3 vertebrae, differing from other species of primates described in the literature.
Figure 3 -
A, B and C: Macro-photography of the dorsal view of the thoracolumbar region of A. belzebul, highlighting the Lumbar Intumescence region (IL), the Medullary Cone region (CM) and the Cauda Equina region (CE). In red, the position of the vertebrae (T11, T12, L1, L5, S1 and S3) is observed. In B, the pachymeter used to assist CM measurement. Bar: 1 cm.
Table 1 describes the results of IC measurement of A. belzebul and other wild primate and mammal species. The comparative results of the IC of A. belzebul are not similar to the topography of the IC of other primates and wild mammals observed in the literature.
Table 1 -
Measurement of the Cervical Intumescence of A. belzebul and other species of wild primates and mammals.
Table 2 -
Measurement of the Lumbar Intumescence of A. belzebul and other species of wild primates and mammals.
Table 3 describes the results of CM measurement of A. belzebul and other primates. The comparative results of the CM of A. belzebul do not present similarity to the topography of the CM of other primates described in the literature. Table 4 describes the results of the CM measurement of A. belzebul in relation to other wild mammal species described in the literature.
Table 3 -
Measurement of the medullary cone of A. belzebul and other species of Primates.
Table 4 -
Measurement of the medullary cone of A. belzebul and other wild mammals.
"year": 2018,
"sha1": "7089d2d54a82138156f6b83773c225e10b2265ca",
"oa_license": "CCBY",
"oa_url": "https://periodicos.ufersa.edu.br/index.php/acta/article/download/7349/9844",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7089d2d54a82138156f6b83773c225e10b2265ca",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
Comparison of catheter-over-needle and catheter-through-needle methods in ultrasound-guided continuous femoral nerve block
Abstract Background: The catheter-through-needle (CTN) method involves the insertion of a catheter with an outer diameter smaller than the initial puncture hole. We investigated whether the catheter-over-needle (CON) method is more effective than the CTN method in reducing local anesthetic leakage at the catheter insertion site and catheter dislodgement, and how it affects postoperative pain management. Methods: Seventy patients scheduled to undergo continuous femoral nerve block for pain control following total knee arthroplasty were enrolled and randomized to receive perineural catheterization with either the CTN method (group CTN) or the CON method (group CON). After ultrasound-guided catheterization, a transparent securement dressing was applied. The study compared the CON and CTN methods in terms of leakage at the catheter insertion site, catheter dislodgement, and postoperative analgesic efficacy for 48 hours postoperatively. Results: Leakage at the catheter insertion site was significantly lower in group CON (P < .05), while catheter dislodgement was not significantly different between the groups. The other adverse events did not differ between the groups. The procedure time was significantly shorter in group CON (P < .05). No significant intergroup differences were observed over 48 hours postoperatively in visual analog scale scores, the number of patients requiring additional analgesics, or the number of times a bolus dose was injected with the injection pump. Conclusion: The CON method shortened the procedure time and reduced the incidence of leakage at the catheter insertion site compared with the CTN method, while showing similar effects in postoperative pain management.
Introduction
Postoperative pain management is one of the key components of enhanced recovery after surgery for total knee arthroplasty (TKA). [1] Patient-controlled analgesia using ultrasound-guided continuous femoral nerve block is known to reduce the duration of hospitalization and rehabilitation treatment by enabling early gait and joint movement through relief of severe pain immediately after surgery in patients undergoing TKA. [2] Serious complications associated with continuous peripheral nerve blocks are generally known to be rare. However, common complications include local anesthetic leakage and catheter dislodgement. Rates of catheter dislodgement are reported in the literature as 6% to 15%. [3,4] Leakage at the catheter insertion site not only reduces the volume of local anesthetic adjacent to the nerve, potentially causing block failure, but also disrupts the securement dressing, causing catheter dislodgement and potentially increasing infectious complications. [5] The conventional catheter-through-needle (CTN) method involves inserting a catheter through the needle and placing the catheter around the femoral nerve. The outer diameter of the catheter is smaller than that of the initial needle-punctured hole, creating a possibility of local anesthetic leakage at the catheter insertion site and catheter dislodgement. To overcome these problems, a catheter-over-needle (CON) method was devised. This method advances a needle with the catheter mounted over it, places it around the nerve, and removes only the needle. The catheter fits tightly in the puncture hole, reducing the incidence of local anesthetic leakage and catheter dislodgement. [3,6] Previous studies of local anesthetic leakage and catheter dislodgement with the CON method have reported conflicting results, and there is limited research on whether the CON method provides better postoperative pain management than the conventional CTN method.
This study aimed to investigate whether the CON method is more effective than the conventional CTN method with respect to local anesthetic leakage and catheter dislodgement, how it affects postoperative pain management, and whether it reduces other adverse events. We hypothesized that the CON method would show less local anesthetic leakage and catheter dislodgement than the conventional CTN method and would provide better postoperative pain management.
Patient enrollment
With the approval of the Institutional Review Board of the authors' hospital (ID 05-2018-130), the trial was registered with the Clinical Research Information Service (registration number: KCT0003509). After obtaining written informed consent, we enrolled 70 American Society of Anesthesiologists physical status I-III patients undergoing TKA. Patients with poor coordination, blood coagulation disorders, neurologic deficits at the insertion site, or allergic reactions to ropivacaine in previous surgeries, as well as pregnant women, were excluded.
Randomization
At the preanesthetic visit, the randomization protocol, pain assessment using the visual analog scale (VAS), and use of a portable electronic injection pump were fully explained to all subjects, who agreed to participate in the study. Random assignment of patients to the 2 groups used a list of random numbers generated in Excel (Microsoft Corporation, Redmond, WA, USA). Patients underwent TKA with an ultrasound-guided continuous femoral nerve block using either the CTN method (group CTN) or the CON method (group CON). The study was a double-blind, randomized controlled trial. To maintain blinding, the investigator who performed the procedure did not measure outcomes after surgery, and the outcome investigator was blinded to the procedure.
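The balanced 1:1 allocation described above can be sketched as follows. This is a minimal Python illustration rather than the Excel procedure the authors used; the group labels, group size, and fixed seed are assumptions for reproducibility only.

```python
import random

def make_allocation(n_per_group=35, groups=("CTN", "CON"), seed=2018):
    """Build a shuffled 1:1 allocation list for two study arms.

    Mirrors the idea of a precomputed random-number list: generate a
    balanced pool of labels, then put them in random order.
    """
    rng = random.Random(seed)                 # fixed seed -> reproducible list
    allocation = list(groups) * n_per_group   # 35 "CTN" + 35 "CON" labels
    rng.shuffle(allocation)                   # random order of assignment
    return allocation

allocation = make_allocation()
print(len(allocation), allocation.count("CTN"), allocation.count("CON"))
# → 70 35 35
```

Enumerating the full balanced pool before shuffling guarantees exactly 35 patients per arm, which simple per-patient coin flips would not.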
Catheter insertion procedure
Before induction of general anesthesia, all patients underwent the ultrasound-guided continuous femoral catheter insertion in the supine position with the leg slightly externally rotated. The femoral nerve was detected using a 5.0 to 13.0 MHz linear probe (LOGIQ e; GE Healthcare, Princeton, NJ, USA). After disinfecting the skin around the inguinal area with chlorhexidine-alcohol, group CTN (n = 35) had the catheter mounted under the femoral nerve using the CTN method, and group CON (n = 35) had it mounted using the CON method.
In group CTN, after infiltration of the needle insertion site with 3 to 4 mL of 2% lidocaine, a 10-cm, 18-gauge Tuohy needle (NRFit PlexoLong Nanoline Kit; Pajunk GmbH, Geisingen, Germany) was inserted and placed along the lower lateral part of the femoral nerve under ultrasound guidance. Electrocardiogram pads were placed 0 to 1 cm medial to the distal quadriceps tendon and attached to a nerve stimulator (Medipia ES400; Life-Tech, Stafford, TX, USA). Initial output of 1 mA, 2 Hz, and 0.2 ms was applied as the block needle was advanced along the lower part of the femoral nerve until quadriceps femoris muscle contractions were elicited, during which the nerve stimulator was turned off. A 20-gauge stimulating catheter (NRFit PlexoLong Nanoline Kit; Pajunk GmbH, Geisingen, Germany) was inserted through the needle. The catheter tip was localized at the lower mid-point of the femoral nerve, adjusted using the ultrasound image, and injected 1 to 2 mL of normal saline. If the tip of the catheter could not be seen, the process of catheter insertion and localization was repeated. After catheter placement, 10 mL of 0.2% ropivacaine was injected under ultrasound guidance to confirm that the local anesthetic diffused well around the nerves. To secure the catheter, the catheter insertion site was attached with a chlorhexidine gluconate transparent securement dressing (Tegaderm CHG; 3M Corporation, St. Paul, MN, USA). Thereafter, an additional 10 mL of 0.2% ropivacaine was injected through the catheter to examine whether local anesthetic leakage had occurred.
In group CON, in the same manner as in group CTN, a 5-cm, 18-gauge cannula with an indwelling 21-gauge needle (E-cath Plus; Pajunk GmbH, Geisingen, Germany) was inserted and placed along the lower lateral part of the femoral nerve under ultrasound guidance. Electrocardiogram pads were placed 0 to 1 cm medial to the distal quadriceps tendon and attached to a nerve stimulator, applied in the same way. A 21-gauge E-catheter with integrated tubing (E-cath Plus; Pajunk GmbH, Geisingen, Germany) was inserted through the indwelling 18-gauge cannula. The E-catheter tip was localized at the lower mid-point of the femoral nerve, adjusted using the ultrasound image, and injected 1 to 2 mL of normal saline. After catheter placement, 10 mL of 0.2% ropivacaine was injected under ultrasound guidance to confirm that the local anesthetic diffused well around the nerves. The catheter insertion site was attached with a chlorhexidine gluconate transparent securement dressing and then injected with an additional 10 mL of 0.2% ropivacaine to check for local anesthetic leakage.
Perioperative management
General anesthesia was maintained using 6 vol% desflurane. At the end of the surgery, 225 mL of 0.2% ropivacaine was infused through the indwelling catheter via a portable electronic injection pump (Accumate 1100; Woo Young Medical Co., Ltd., Chung-Buk, Korea) for the first 48 hours after surgery in both groups. Both groups received a scheduled dose of 5 mL of 0.2% ropivacaine at 4-hour intervals and a patient-demand bolus dose of 5 mL with a lockout time of 30 minutes through the catheter using the portable electronic injection pump. All surgical procedures were performed by the same orthopedic surgeon. Before subcutaneous closure, an intra-articular injection of 30 mL of 0.2% ropivacaine was performed by the surgeon.
Outcome measurements
The primary outcome was leakage at the catheter insertion site under the transparent securement dressing, detected by visual inspection. Other catheter-related adverse events such as dislodgement, kinking, knotting, and cutting were monitored and recorded for 48 hours postoperatively. The procedure time was defined as the time from infiltration of local anesthetic to the time the chlorhexidine gluconate securement dressing was applied over the catheter insertion site. An investigator who was blinded to the group assignments assessed postoperative pain using a VAS, as well as the incidence of patients requiring additional analgesics, the total consumed dose of local anesthetics, adverse events related to local anesthetics, and patient satisfaction regarding postoperative pain management. The VAS was recorded immediately after admission to the postanesthetic care unit and at 1, 4, 12, 24, 36, and 48 hours postoperatively. When the VAS score was >60 and the patient wanted analgesics during the postoperative period, morphine 0.05 mg/kg was injected. Additional analgesic requirements within 48 hours after surgery were documented by the investigator as the incidence of patients requiring additional analgesics. Adverse events related to local anesthetics, including nausea, vomiting, dizziness, hypotension, urinary retention, and paresthesia, were noted. Patient satisfaction regarding postoperative pain management was assessed on a 5-point Likert scale as follows: [7] 5 = very satisfied, 4 = satisfied, 3 = neutral, 2 = dissatisfied, and 1 = very dissatisfied.
Sample size estimation
The primary outcome was the rate of leakage at the catheter placed for the ultrasound-guided continuous femoral nerve block. In previous studies, the leakage rates with the CTN and CON methods were 55% and 0%, respectively. [8] In this study, assuming a difference in leakage rate of approximately 30% between the 2 methods, the required sample size was calculated as 32 patients per group with type I (α) and type II (β) errors of 0.05 and 0.2, respectively. Taking into account a 10% dropout rate, the sample size was set at 35 patients in each group. Altogether, 70 patients were recruited in this study.
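The sample-size arithmetic above can be reproduced with the standard two-proportion formula using only the Python standard library. The paper does not state the exact rates used, only a ~30% difference; the illustrative rates of 40% vs 10% below are our assumption, chosen because they reproduce the reported 32 patients per group.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_two_proportions(p1, p2, alpha=0.05, power=0.8):
    """Per-group sample size for comparing two proportions
    (normal approximation, two-sided test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ≈ 1.96 for alpha = .05
    z_b = NormalDist().inv_cdf(power)           # ≈ 0.84 for power = .80
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Assumed leakage rates giving a 30% absolute difference
n = n_two_proportions(0.40, 0.10)
print(n)   # → 32 per group; 35 per group after allowing for ~10% dropout
```

The dropout inflation (32 → 35 per group) then matches the enrollment of 70 patients described in the text.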
Statistical analysis
Statistical analysis was performed using IBM SPSS Statistics for Windows, version 26.0 (IBM Corp., Armonk, NY, USA). For the demographic data, a Student t test was used for the numerical data, and the chi-squared test was used for the categorical data. A t test was used to compare VAS scores and the total consumed dose of local anesthetics. The incidences of adverse events, patient satisfaction with postoperative pain management, the number of bolus injections delivered by the injection pump, and rescue opioid administration were compared using the chi-squared test or Fisher exact test. P < .05 was considered statistically significant.
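As an illustration of the categorical comparison described above, the observed leakage counts from the Results (11 of 32 in group CTN vs 2 of 33 in group CON) can be tested with a 2 × 2 Pearson chi-squared statistic using only the standard library. Note that the paper used SPSS, and with cell counts this small a Fisher exact test would often be preferred; this sketch shows the chi-squared calculation without a continuity correction.

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic (1 df, no continuity correction)
    for a 2x2 table [[a, b], [c, d]], plus its two-sided p-value.

    For 1 df, P(chi2 > x) = erfc(sqrt(x / 2)).
    """
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = erfc(sqrt(chi2 / 2))
    return chi2, p

# Leakage at the catheter insertion site: CTN 11 of 32, CON 2 of 33
chi2, p = chi2_2x2(11, 21, 2, 31)
print(round(chi2, 2), p < 0.05)   # → 8.14 True
```

Consistent with the reported result, the leakage difference is significant at the .05 level under this approximation.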
Results
Seventy patients were enrolled in this study. One patient in group CTN and 2 patients in group CON had the perineural catheter dislodge within the first 24 hours after surgery. They were excluded from the study and received postoperative pain management through intravenous patient-controlled analgesia using nonsteroidal anti-inflammatory drugs, with opioids for rescue analgesia. Two patients in group CTN did not want to continue the study after surgery; the remaining 65 patients completed the study (Fig. 1). Regarding demographic data, no differences were observed between the 2 groups in American Society of Anesthesiologists physical status, sex, age, height, weight, or anesthesia time. The procedure time was statistically shorter in group CON (P < .05, Table 1).
Leakage at the catheter insertion site occurred in 11 patients in group CTN and 2 patients in group CON. Leakage at the catheter insertion site was significantly lower in group CON (P < .05). As mentioned above, catheter dislodgement occurred in 1 patient in group CTN and 2 patients in group CON. In 1 patient in group CTN, an "occlusion" alarm occurred in the portable electronic infusion pump, caused by kinking of the catheter attached across the inguinal fold when the patient sat down. This was resolved by removing the fixation tape, straightening the kinked portion, and fixing it with new tape. There were no other adverse events related to the perineural catheter and no statistically significant differences between the 2 groups. Adverse events related to local anesthetics were not different between the groups (Table 2). No significant intergroup differences were observed in the VAS immediately after admission to the postanesthetic care unit or at 1, 4, 12, 24, 36, and 48 hours after surgery (Fig. 2). The incidence of patients requiring additional analgesics within 48 hours after surgery was not significantly different between the 2 groups (Table 3).
The number of times a bolus dose was injected within 48 hours after surgery was not significantly different. There were also no significant differences between the groups in the total consumed dose of local anesthetics from the portable electronic injection pump (Table 4). Patient satisfaction regarding postoperative pain management was not significantly different between the 2 groups.
Discussion
This randomized comparative study was undertaken to investigate whether the CON method is more effective than the conventional CTN method with regard to local anesthetic leakage and catheter dislodgement, and how it affects postoperative pain management and the reduction of other adverse events.
The results of this study showed that leakage at the catheter insertion site was significantly lower with the CON method; however, there was no difference in catheter dislodgement between the CTN and CON methods. There were also no differences in postoperative pain management or the various other adverse events between the 2 methods. Complications such as local anesthetic leakage and catheter dislodgement are common in continuous peripheral nerve blocks. Incorrect positioning of catheters occurs in up to 40% of cases, leading to disruption of the dressing, which can lead to catheter dislodgement and potentially increase infective complications. Leakage reduces the volume of local anesthetic adjacent to the nerve, potentially causing block failure. Inadvertent catheter dislodgement is another common complication of continuous catheter techniques. [9,10] In this study, the CON method showed 6.1% leakage and 6.1% dislodgement, whereas the CTN method showed 34.4% leakage and 3.1% dislodgement. As shown in the results, the CON method had little leakage. The catheter in the CON method has a diameter larger than that of the puncture needle. This may increase resistive forces when traction is unintentionally applied to the catheter, decreasing the chance of dislodgement. [6] In addition, we used a transparent securement dressing with sticky gel-type chlorhexidine gluconate at the catheter insertion site, which reportedly prevents such leakage during continuous infusion. These 2 measures prevented leakage at the catheter insertion site and catheter dislodgement in the CON method.
In a recent study, leakage at the catheter insertion site was observed in 55% of patients using the CTN method, whereas there was no leakage using the CON method. [8] Although dependent on different operators and different insertion sites, leakage at the catheter insertion site has been reported to occur in 3% to 30% of perineural catheters using the CTN method. [11] As reported in several articles, the CTN method has a problem with leakage at the catheter insertion site, so suturing the catheter has mainly been used to resolve this problem. In addition to suturing the catheter, several methods have been used to overcome the adverse events associated with the perineural catheter, including subcutaneous tunneling, [12] application of adhesive glue, [13,14] and the addition of adhesive anchoring devices such as wound closure strips (Steri-Strips; 3M Corporation, St. Paul, MN, USA) and catheter-hub connection devices (eg, StatLock). [15,16] However, if the suture is too tight, the catheter may become occluded, and subcutaneous tunneling carries risk due to the additional procedure.
The design of a conventional CTN assembly, in which a flexible smaller-diameter catheter is passed through a larger-diameter needle, may be prone to leakage and dislodgement at the catheter insertion site. On the other hand, the CON method forms a tighter seal at the needle insertion site. The diameter of the catheters used in the CON method is larger than that of the needle, sealing the catheter in place and reducing the risk of leakage and dislodgement at the insertion site. According to a previous study, the catheter used in the CON method had a holding force 6 times greater than that of the CTN method. [6] A greater holding force means that the catheters used in the CON method will not fall out as frequently, especially if the catheters are fixed only with a dressing.
Another advantage is that the CON design allows the clinician to pull out the needle while simultaneously holding the catheter in place. Because the skin at the insertion site secures the catheter firmly, the clinician can withdraw the needle with 1 hand without the catheter moving back and forth at the point where it enters the skin. There was also an advantage in reducing the procedure time, since no catheter fixation was required, and the procedure time was significantly shorter with the CON method in our study. In addition, the CON method likely saved time because it did not require an ultrasound verification step to confirm that the catheter was properly positioned under the femoral nerve. On the other hand, the CON catheter is relatively thick and less flexible; if the skin surface was not flat, the catheter tended to lift away from the skin, and in such cases there seems to be a risk of dislodgement. These characteristics of the CON catheter are considered suitable for continuous femoral nerve block. [17] The CON method was used to place the catheter tip under the femoral nerve. Such placement helped prevent perineural catheter tip dislocation and might have reduced the incidence of unintended nerve blocking caused by an inappropriate catheter tip position. Thus, the incidence of adverse events could be expected to be lower with the CON method than with the CTN method. However, there were no significant differences in the incidence of adverse events related to local analgesics.
[Table 3: Incidences of patients requiring additional analgesic within 48 hours after surgery (Group CTN, n = 32; Group CON, n = 33). Table 4: Bolus doses injected within 48 hours after surgery and total consumed dose of local anesthetics from the portable electronic injection system; values are mean ± standard deviation. 0 h = immediately after admission to the PACU; CON = catheter-over-needle; CTN = catheter-through-needle; PACU = postanesthetic care unit.]
Meanwhile, there were no differences between the 2 methods in VAS scores, the incidence of patients requiring additional analgesics, the number of bolus dose injections, or the total consumed dose of local anesthetics from the portable electronic injection system. Because leakage at the catheter insertion site was lower and the catheter diameter larger, we expected local anesthetics to spread better and provide better pain control with the CON method, but the actual results did not confirm this.
Our study has several limitations. First, the needles used in the CTN and CON methods were different: a 10-cm, 18-gauge needle with a Tuohy-type tip was used in the CTN method, and a 5-cm, 21-gauge needle with a facet grinding-type tip was used in the CON method. The 2 needles differed by 5 cm in length and had differently shaped tips; these differences might also have affected the procedure time. Second, although comparing catheter-related adverse events requires that both methods proceed under the same conditions, such conditions are difficult to establish. In our study, the CTN method resulted in leakage at the catheter insertion site in 34.4% of patients; local anesthetics leaked out at the insertion site, wetting the dressing area and making it necessary to reattach the transparent securement dressing. Third, because of the different shapes of the 2 types of catheters, it was impossible to blind the catheter-related review during or immediately after the procedure. Instead, several assessments were performed in a blinded manner, covering the procedural area with clothing such that neither patients nor researchers could know which catheter was used.
In conclusion, the CON method shortened the procedure time and reduced the incidence of leakage at the catheter insertion site compared with the CTN method, while providing similar postoperative pain management.
Monocarboxylate Transporter 8 Deficiency: From Pathophysiological Understanding to Therapy Development
Genetic defects in the thyroid hormone transporter monocarboxylate transporter 8 (MCT8) result in MCT8 deficiency. This disorder is characterized by a combination of severe intellectual and motor disability, caused by decreased cerebral thyroid hormone signalling, and a chronic thyrotoxic state in peripheral tissues, caused by exposure to elevated serum T3 concentrations. In particular, MCT8 plays a crucial role in the transport of thyroid hormone across the blood-brain barrier. The life expectancy of patients with MCT8 deficiency is strongly reduced. Absence of head control and being underweight at a young age, which are considered proxies of the severity of the neurocognitive and peripheral phenotype, respectively, are associated with a higher mortality rate. The thyroid hormone analogue triiodothyroacetic acid is able to effectively and safely ameliorate the peripheral thyrotoxicosis; its effect on the neurocognitive phenotype is currently under investigation. Other possible therapies are at a pre-clinical stage. This review provides an overview of the current understanding of the physiological role of MCT8 and the pathophysiology, key clinical characteristics and developing treatment options for MCT8 deficiency.
INTRODUCTION
Throughout life, thyroid hormone plays an indispensable role in many processes in almost all tissues of the human body. During prenatal and early postnatal life, adequate thyroid hormone signalling is crucial for normal neurodevelopment (1). Furthermore, thyroid hormone regulates key metabolic processes (e.g. mitochondrial respiration) in various tissues, including the liver, kidneys and muscles (2,3).
Thyroid hormone is the common name for both the prohormone thyroxine (T4), the major product of the thyroid, and the biologically active triiodothyronine (T3). Intracellular thyroid hormone signalling is governed by three major processes: 1) transport of thyroid hormone across the cell membrane, facilitated by specific thyroid hormone transporter proteins, 2) conversion of T4 into T3 or the inactive metabolite reverse (r)T3 and further degradation into other inactive thyroid hormone metabolites by deiodinating enzymes types 1-3 (DIO1-3), and 3) genomic action of T3 upon binding to thyroid hormone receptor (TR) a and b (4). Together, these mechanisms allow for a precise and tissue-specific regulation of intracellular thyroid hormone signalling. This is pivotal for proper development and function of many tissues and crucial for the overall homeostasis of the hypothalamus-pituitary-thyroid (HPT) axis (2,3). Hence, alterations in any of these mechanisms can result in tissue-specific thyroid hormone signalling defects. In humans, defects in all of these mechanisms have been identified; such disorders generally impair cellular thyroid hormone signalling (5)(6)(7)(8)(9)(10)(11)(12).
To date, the most specific thyroid hormone transporter identified is monocarboxylate transporter (MCT) 8, encoded by SLC16A2 on chromosome Xq13.2 (13). Pathogenic variants in this gene result in a clinical syndrome of severe intellectual and motor disability and increased serum T3 concentrations, leading to thyrotoxic symptoms in peripheral tissues, together known as MCT8 deficiency [also known as Allan-Herndon-Dudley Syndrome (AHDS); OMIM number 300523] (8,9,14). Patients generally have poor head control, remain non-verbal, are wheelchair bound and severely underweight and often die at a young age (15). Since the identification of the first series of patients with MCT8 deficiency, many efforts have been undertaken to better understand this rare disorder and to develop potential therapeutic strategies ( Figure 1). However, important pathophysiological questions remain unanswered to date and treatment options are limited.
PHYSIOLOGICAL ROLE OF MCT8
The SLC16A2 gene comprises two transcriptional start sites, resulting either in an MCT8 protein of 613 amino acids (referred to as the 'long' isoform) or 539 amino acids (referred to as the 'short' isoform). Although the functional properties of both isoforms are highly similar, the short isoform is generally considered physiologically relevant, since this is the only isoform identified in human tissue to date. In vitro overexpression studies have demonstrated a role for the extended N-terminus, specific to the long MCT8 isoform, in ubiquitin-dependent proteasomal degradation, thereby potentially regulating expression of MCT8 protein (16,17). Yet, there is no definitive proof for the existence of a long MCT8 protein isoform under physiological conditions, which led to the recent change of the reference sequence from the long isoform (NM_006517.3; NP_006508.1) to the short isoform (NM_006517.5; NP_006508.2). It should be emphasized that most literature on MCT8 available to date used the long translational isoform to assign the position of variants. To avoid confusion amongst researchers and clinicians, we recently proposed to continue using the long isoform of MCT8 (counting from the first translational start site) in the nomenclature of SLC16A2 variants, according to the majority of variants described in literature (18,19).
MCT8 is capable of mediating the flux of thyroid hormone across the cell membrane through facilitated diffusion, independent of pH or a Na+ gradient (18). Major substrates of MCT8 are the iodothyronines T3 and T4; to a lesser extent, it is also capable of mediating transport of the inactive metabolites rT3 and 3,3′-diiodothyronine (3,3′-T2) (20). Both uptake and efflux of thyroid hormone across the cell membrane are facilitated by MCT8 (21). Interestingly, transport of thyroid hormone metabolites lacking the α-NH2 group [e.g. 3,3′,5-triiodothyroacetic acid (Triac) and 3,3′,5,5′-tetraiodothyroacetic acid (Tetrac)] is not dependent on MCT8 (22,23). Silychristin, a flavonolignan found in some traditional European and Asian medicinal compounds, is capable of specifically and effectively blocking MCT8-mediated thyroid hormone transport in different in vitro and ex vivo models (24,25). Based on studies using the inhibitors bromsulphthalein (BSP) and desipramine, Jomura et al. suggest that MCT8 can be involved in efflux of the anti-epileptic drug phenytoin across the blood-brain barrier (BBB) (26). However, as these inhibitors are not MCT8-specific and BSP inhibits the majority of thyroid hormone transporting organic anion transporter proteins (OATPs) (18), additional evidence is warranted to support this hypothesis.
MCT8 is ubiquitously expressed throughout the human body. Both MCT8 mRNA and protein are most prominently expressed in the liver, but are also found in significant quantities in the thyroid, kidneys, pituitary and brain (27). In particular, MCT8 is expressed in different neural cells, including subtypes of neurons, astrocytes, oligodendrocytes and tanycytes (28,29). Expression of MCT8 in tanycytes may potentially play an important role in the negative feedback of HPT axis homeostasis (30). During all stages of intra-uterine development, MCT8 expression is observed in vascular structures within the foetal brain and, later in development, in their surrounding astrocytes (31). Moreover, MCT8 expression is observed in radial glial cells, leptomeningeal cells and blood vessels in the subarachnoid space of foetal brains (at 14 to 38 gestational weeks) (31). These findings are indicative of a prominent role for MCT8 at the BBB and, to a lesser extent, also at the blood-cerebrospinal fluid barrier (BCSFB), and underlie the current paradigm that MCT8 is crucial for thyroid hormone transport across the BBB in particular. Interestingly, recent detailed protein expression studies by Wilpert et al. on murine brain tissues demonstrated strong expression of MCT8 in the brain barriers and many subpopulations of neurons (including cortical and cerebellar neurons) at a young age (postnatal day 6, representative of the foetal phase in humans), but a sharp decrease in neuronal MCT8 expression upon aging, whereas expression in the BBB and BCSFB did not change upon aging. In adult post-mortem and fresh human brain tissues from 4 older individuals (50-82 years), a similar pattern of strong MCT8 expression in endothelial cells and minimal to no expression in neuronal tissues was found (32).
Together, these findings indicate the presence of MCT8 in the majority of brain tissues during (prenatal) development, whereas its presence may be restricted to the brain barriers later in adult life. Yet, it is hitherto unknown how MCT8 functionally contributes to intracellular thyroid hormone homeostasis in different cell types at different developmental stages.
Despite its ubiquitous expression in human tissues other than the brain (so-called peripheral tissues), the physiological role of MCT8 in these tissues is less well-defined. Importantly, along with MCT8, various alternative thyroid hormone transporters are ubiquitously expressed throughout the peripheral tissues (members of the L-type amino acid transporter and OATP families, as well as Na+-taurocholate cotransporting polypeptide and the recently identified thyroid hormone transporter SLC17A4) (18). Hence, peripheral tissues have a variable dependence on MCT8 for adequate thyroid hormone homeostasis (33,34).
MCT8 is highly expressed on the basolateral side of follicular epithelial cells of the murine thyroid, as demonstrated by different in situ hybridization, immunohistochemistry and immunoblot analyses (35,36). Similar observations were made in the human thyroid (37).
PATHOPHYSIOLOGY OF MCT8 DEFICIENCY
The relevance of adequate MCT8-mediated thyroid hormone transport becomes apparent in patients harbouring mutations in the SLC16A2 gene. Such mutations result in a phenotype of severe intellectual and motor disability and signs of peripheral thyrotoxicosis, including negative clinical sequelae such as tachycardia and being severely underweight. As SLC16A2 is located on the X chromosome, mostly men are affected; one female with MCT8 deficiency due to unfavourable X-inactivation has been described (38). A detailed description of the clinical characteristics of this disease is provided in section Disease Characteristics of this review.
Different animal models have been exploited to better understand the pathophysiology of MCT8 deficiency. The first animal model generated for these purposes was the Mct8 knockout (KO) mouse (39,40). In mice, which are commonly studied to delineate tissue-specific intracellular thyroid hormone signalling processes, the expression pattern of MCT8 is largely comparable to the human situation (18). However, in contrast to the human BBB, the BBB of mice (and other rodents) co-expresses the T4-specific transporter Oatp1c1, enabling T4 transport into the brain and subsequent local conversion into T3 by Dio2 (41). This observation is highly relevant when studying cerebral thyroid hormone homeostasis, as murine models do not closely resemble the human physiological situation in this respect. Although Mct8 KO mice closely reproduce the serum thyroid hormone fingerprint of patients with MCT8 deficiency, no overt neurological abnormalities were identified. Therefore, results obtained in rodent brains should be translated with caution to the human situation (42). Hence, Mct8 KO mice are generally considered an adequate model for the peripheral thyrotoxic state observed in patients with MCT8 deficiency, but should not be used when evaluating (treatment effects on) the neurocognitive phenotype (42). Subsequently, an Mct8/Oatp1c1 double KO (DKO) mouse model was generated, which indeed mimicked both the peripheral thyrotoxic and the neurocognitive phenotype of MCT8 deficiency. The latter was illustrated by pronounced locomotor abnormalities, altered Purkinje cell dendritogenesis, a reduction in cerebral T3 and T4 content to 10%, decreased cerebral expression of the thyroid hormone sensitive gene Hairless and reduced cerebral levels of myelin basic protein (MBP), indicating hypomyelination (41).
More recent studies suggest that Mct8/Dio2 DKO mice may also mimic human MCT8 deficiency, as inactivation of Dio2 prevents the formation of sufficient intracerebral T3 concentrations even in the presence of functional Oatp1c1 (43). In addition to murine models, zebrafish and chicken models are available and resemble parts of the phenotypic spectrum of MCT8 deficiency (44,45). The functional contribution of MCT8 in the BBB only became apparent recently, when Vatine et al. successfully obtained MCT8 deficient vascular endothelial and neural cells from patient-derived induced pluripotent stem cells (iPSCs) (46). In these MCT8 deficient neural cells, reduced thyroid hormone uptake capacity was found. However, the ability of these MCT8 deficient iPSCs to differentiate into neural cells was grossly unaltered. This latter observation suggests that, despite the absence of functional MCT8, sufficient intracellular T3 concentrations are reached in these cells. Utilizing iPSCs to model the BBB demonstrated the relevance of MCT8 in this barrier, with transport of T3 across the modelled BBB being significantly reduced in the absence of MCT8 (46). Defective MCT8 in specific neuronal populations or in neural stem cells from patient-derived iPSCs was not studied. Complementary studies by Mayerl et al. on adult hippocampal neurogenesis in murine neural stem cells showed that this process is largely regulated by thyroid hormone signalling in a cell-autonomous fashion, with MCT8 functioning as an important gatekeeper (47). Upon both global deletion and adult neural stem cell-specific deletion of MCT8, differentiation of hippocampal neuroblasts was reduced. Similarly, recent studies by Vatine et al. demonstrated that oligodendrocyte precursor cells (OPCs) are not able to generate myelinating oligodendrocytes in the context of reduced cerebral thyroid hormone signalling, as observed in MCT8 deficiency (29).
It should be noted that these studies used different models representing different neural cell types and different stages of neuronal development, limiting direct comparison of their results. López-Espíndola et al. performed post-mortem studies on the cerebral cortex and cerebellum of a human foetus with MCT8 deficiency (30 gestational weeks) and an 11-year-old patient with MCT8 deficiency (48). The foetal brain already showed delayed cortical and cerebellar development (including altered Purkinje cell dendritogenesis), hypomyelination (indicated by low levels of MBP) and impaired maturation of the axons, indicating that the brain suffers from a severe state of low thyroid hormone signalling during pregnancy. Similar features were identified in the brain of the 11-year-old patient, thus excluding spontaneous recovery of the neural aberrations with increasing age. Cerebral cortex T3 and T4 concentrations were reduced by 50%. The observed features were in line with those found in Mct8/Oatp1c1 DKO mice (41). Interestingly, the reduction in cerebral thyroid hormone content was less prominent when compared to Mct8/Oatp1c1 DKO mice. This remarkable difference is currently not well understood; amongst other factors, differential expression of so far unidentified thyroid hormone transporters or other protective mechanisms could underlie this observation. Together with the studies of Wilpert et al., these studies point towards a prominent role of MCT8 in the transport of thyroid hormone across the brain barriers, whereas the role of MCT8 in neural cells appears to vary between cell types and developmental stages.
Over the years, multiple hypotheses have been postulated on the pathophysiology of the distinct thyroid hormone fingerprint in MCT8 deficiency. It was first reasoned that, due to reduced cerebral T3 uptake, a progressive buildup of T3 in serum stimulates DIO1 activity in peripheral tissues, thus further aggravating the peripheral thyrotoxicosis by enhancing T4 to T3 conversion. In Mct8/Pax8 DKO and Pax8 KO mice, both being completely athyroid, injection of T3 resulted in similar serum T3 concentrations in both groups (35). In contrast, injection of T4 resulted in lower serum T4 concentrations in Mct8/Pax8 DKO mice compared to Pax8 KO mice, indicating increased deiodinase activity independent of the circulating T3 concentrations. Hence, a reduced cerebral thyroid hormone uptake does not explain the characteristic high T3:T4 ratio observed in MCT8 deficiency. As MCT8 is expressed in human tanycytes and thyrotropin releasing hormone (TRH)-expressing neurons (28,49), it was hypothesized that MCT8 may have an important role in controlling the HPT axis, and in particular the negative feedback system. Moreover, the co-expression of MCT8 and DIO2 in folliculostellate cells of the human pituitary indicates that MCT8 might also control the HPT axis at the pituitary level, with folliculostellate cells acting in a paracrine fashion (49). In Mct8 KO mice the thyroidal secretion of T4 is decreased, whereas increased release of T3 is observed (35,36). It should be noted that the ratio of thyroidal T3 vs T4 secretion and intrathyroidal deiodinase activity are different between rodents and humans (50). Taken together, it is likely that MCT8 plays a role in thyroid hormone homeostasis at all levels of the HPT axis (Figure 2). This, however, does not completely explain the high serum T3:T4 ratio that is observed in all patients with MCT8 deficiency (see Disease Characteristics), as the increased peripheral thyroid hormone metabolism remains not well understood in this hypothesis.
A third hypothesis focusses on deiodinase activity in peripheral tissues. Dio1 has a role in rT3 clearance and converts the prohormone T4 into biologically active T3. Dio2 has an activating function by catalysing T4 to T3 conversion, whereas Dio3 primarily has a role in inactivation of thyroid hormone (conversion of T4 to rT3 and T3 to T2) (4). In the liver and kidneys of Mct8 KO mice, strongly increased activity of Dio1 was observed, whereas increased Dio2 activity was found in the brain, pituitary and skeletal muscles (39,40). In these mice, increased liver T3 content and increased expression of Dio1 were observed, resulting in clinical features of increased thyroid hormone signalling in the liver (e.g. decreased serum cholesterol concentrations), in line with observations made in patients with MCT8 deficiency (15,39,51). Interestingly, proximal tubule cells that normally express MCT8 showed increased Dio1 expression in the absence of MCT8. Moreover, upon peripheral injection of radiolabelled T3 and T4 in Mct8 KO mice, increased accumulation of radioactivity was observed in kidney, but not in liver homogenates (40). Also, liver-specific KO of Dio1 in Mct8 KO mice did not result in normalization of the circulating thyroid hormone levels, suggesting that the increased hepatic Dio1 expression is not causing the abnormal thyroid hormone levels (52). Together, these findings hint towards a crucial role for MCT8 in renal T4 efflux, thereby maintaining global thyroid hormone homeostasis (Figure 2). Based on these findings, it is currently hypothesized that, due to deficient MCT8-mediated efflux, T4 accumulates in the kidneys, where increased Dio1 activity converts it into T3, contributing to the characteristic high serum T3:T4 ratio.
DISEASE CHARACTERISTICS
In 1944, Allan, Herndon and Dudley first described families with several members suffering from an X-linked form of neurodevelopmental delay, resulting in a lack of speech and impaired walking ability. Hence, it was coined Allan-Herndon-Dudley Syndrome or AHDS (53). The underlying pathogenic mechanism of AHDS remained unclear up until 2004. Soon after the identification of MCT8 as a specific thyroid hormone transporter (13), the first patients with a mutation therein were identified, presenting a clinical phenotype remarkably similar to that of the males described by Allan, Herndon and Dudley (8,9). In 2005, Schwartz et al. provided definite proof that the families originally described by Allan, Herndon and Dudley indeed carried mutations in MCT8 (14). Thus, the terms MCT8 deficiency and AHDS both comprise the same syndrome and are both commonly used in the field. With the current tendency to avoid eponyms, MCT8 deficiency is currently preferred to describe the disease. Initial reports mainly focused on the neurodevelopmental phenotype, resulting from defective thyroid hormone transport into the brain. Over time, it was increasingly recognized that, in contrast to the brain, other tissues of the human body (hereafter called peripheral tissues) are in a chronic thyrotoxic state. Due to the abundance of alternative thyroid hormone transporters in peripheral tissues, these tissues fully sense the high circulating T3 concentrations.
When studying the genetic basis of MCT8 deficiency, it is critical to discriminate deleterious mutations from benign (rare) variants (54). Approximately 150 different mutations in about 250 different families have been described in literature and a prevalence of 1:70,000 males has been suggested (15,55). They can be classified in three major groups: large deletions resulting in an incomplete MCT8 protein, insertions/deletions/nonsense mutations resulting in a frameshift or premature truncation, and missense mutations resulting in a single amino acid change (18). Whereas the pathogenic character of the first two categories can be predicted with considerable certainty, not all missense variants impair MCT8 thyroid hormone transport function. Evaluation of the residual thyroid hormone transport capacity of these variants in in vitro systems or patient-derived fibroblasts is therefore indispensable to confirm the pathogenic nature of identified variants (56)(57)(58). Recent studies suggested that C-terminal missense variants beyond amino acid residue Met574 (long isoform) are well-tolerated and likely do not result in a phenotype (59). Additional studies are warranted to further delineate the relationship between the different mutations and the phenotypic variability.
In the years following its discovery, understanding of the phenotype of MCT8 deficiency largely relied on reports of individual cases and studies including small cohorts of patients with MCT8 deficiency. With the number of identified patients strongly increasing with rising awareness amongst physicians in recent years, the opportunity arose to study larger cohorts of patients (15,60). Following structured evaluation of two cohorts, better understanding of different aspects of MCT8 deficiency was obtained. The first cohort, described by Remerand et al., comprised 24 patients in whom specifically the neurological features were further detailed (60). In line with previous reports, they showed that the neurocognitive phenotype of MCT8 deficiency comprises severe intellectual and motor disability. Hypotonia, spasticity and dystonia are key clinical features (observed in 100%, 71% and 75%, respectively). The majority of patients had poor head control, did not develop speech and did not attain the ability to sit or walk. Magnetic resonance imaging (MRI) depicted hypomyelination in the majority of patients (19 out of 24 patients) and global brain atrophy in approximately 50% of patients. Similar observations were made earlier, in 13 MRI scans of 6 patients (61). Moreover, detailed diffusion tensor imaging, available in three subjects, demonstrated a lack of definition in the anteroposteriorly directed association tracts, whereas the commissural white matter tracts (indicating the corpus callosum) and the corticospinal tracts appeared relatively normal. A recent review of the available literature suggested that MRI scans of patients show gradual improvement of myelination throughout life, indicating a state of delayed myelination rather than hypomyelination (62). In contrast, post-mortem microscopic evaluation of the brain of an 11-year-old patient demonstrated deficient myelination [also see Pathophysiology of MCT8 Deficiency (48)].
Strikingly, in the cohort of Remerand and colleagues, 8 out of 24 (33%) patients were able to walk, indicative of a less severe phenotype. This proportion was considerably higher than estimates based on available case reports and case series (18,61), suggesting that patients with a relatively less severe phenotype were overrepresented in this cohort. Although Remerand et al. reported on thyroid function tests, the peripheral phenotype had not been further detailed.
Hence, the establishment of a larger cohort was warranted, in order to obtain more robust phenotypic data covering both the neurological and peripheral phenotype. In an international multicenter effort, Groeneweg et al. described the phenotypic characteristics of up to 151 patients with MCT8 deficiency (15). Similar observations of key clinical neurological symptoms were made as in the cohort of Remerand et al., albeit with a much lower proportion of patients who had developed walking abilities (4 out of 77 patients). Also, this study was the first to systematically collect quantitative data on the motor and cognitive abilities of patients with MCT8 deficiency, as assessed with the Gross Motor Function Measure-88 and Bayley Scales of Infant Development III. These abilities plateaued at a median developmental age of well below 12 months, whereas the median age at evaluation was 6.4 years.
Next to neurodevelopmental symptoms, this study was the first to provide large-scale data on the peripheral phenotype (15). Observed hallmark features were elevated serum T3 concentrations (present in 95% of patients), hypothyroxinaemia (present in 89% and 90% of patients for serum free T4 and total T4 concentrations, respectively), with serum thyroid stimulating hormone (TSH) concentrations within the age-specific reference range in the majority (89%) of patients. Serum rT3 concentrations were low in 91% of patients, resulting in a high T3:rT3 ratio in all patients. No difference in serum T3 concentrations was observed between patients with a severe and those with a less severe phenotype. In contrast, patients with a less severe phenotype showed significantly higher total T4 concentrations when compared to patients with a severe phenotype (although still below the reference interval). As a consequence, the T3:T4 ratio was lower in patients with a less severe phenotype. Importantly, of eight patients with T4-based neonatal screening data available, the majority (88%) had total T4 concentrations below the 20th percentile (Figure 3A). None were identified through neonatal screening. Also, none would be identified as abnormal in TSH-based screening programs, which are commonly utilized (Figure 3B). Recent studies demonstrated that rT3 concentrations and the T3/rT3 ratio might be suitable biomarkers in neonatal screening (63). As pregnancy and delivery are unremarkable in most cases (15) and treatment early in life can potentially improve the neurocognitive phenotype, early identification of patients with MCT8 deficiency through neonatal screening could be of great added value.
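As a rough illustration of how a ratio-based screening flag along these lines could work in practice, the sketch below computes a T3:rT3 ratio and flags samples exceeding a cutoff. Note that the function names and the cutoff value are purely hypothetical and for illustration only; they are not derived from the cited studies and are not clinically validated.

```python
# Hypothetical sketch of a T3:rT3 ratio screening flag.
# The cutoff below is ILLUSTRATIVE ONLY, not a validated clinical threshold.

def t3_rt3_ratio(t3_nmol_l: float, rt3_nmol_l: float) -> float:
    """Return the serum T3:rT3 ratio; both inputs in the same unit (e.g. nmol/L)."""
    if rt3_nmol_l <= 0:
        raise ValueError("rT3 concentration must be positive")
    return t3_nmol_l / rt3_nmol_l

def flag_sample(t3: float, rt3: float, ratio_cutoff: float = 10.0) -> bool:
    """Flag a sample when the T3:rT3 ratio exceeds a (hypothetical) cutoff.

    High serum T3 combined with low rT3 yields a high ratio, the pattern
    reported in MCT8 deficiency.
    """
    return t3_rt3_ratio(t3, rt3) > ratio_cutoff

# Example: elevated T3 with low rT3 gives a high ratio and is flagged.
print(flag_sample(t3=4.5, rt3=0.2))  # prints True (ratio 22.5 > 10.0)
```

Any real screening algorithm would of course require age-specific reference intervals and validation against patient cohorts, as discussed above.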
As a consequence of the elevated serum T3 concentrations, clinical parameters and biochemical markers reflecting thyroid hormone action in peripheral tissues were altered. Being underweight was detected in 71% of patients and 84% of patients had hypotrophic musculature. Recurrent pulmonary infections were common, which might be related to the high incidence of impaired swallowing function (observed in 71% of patients) and increased susceptibility as a result of being underweight. Moreover, signs of cardiovascular dysfunction were commonly observed, including premature atrial contractions (PACs), resting tachycardia and elevated systolic blood pressure (in 76%, 31% and 53% of patients, respectively). Serum sex hormone-binding globulin (SHBG) concentrations, considered to be a marker of thyroid hormone signalling in the liver, were elevated compared to age-specific reference intervals in 88% of patients. Whereas life span had previously been reported to appear relatively normal (14), systematic large-scale phenotyping showed a strongly decreased life expectancy in patients with MCT8 deficiency, with a median survival of 35 years (15). Proxies of the severity of both the peripheral and neurocognitive phenotype (being underweight early in life (1-3 years of age) and absence of head control before the age of 1.5 years, respectively) were strongly associated with increased mortality risk at a young age, with approximately 50% of severely affected patients dying in childhood. Importantly, sudden death was reported as a common cause of death. The observed high prevalence of cardiovascular abnormalities may point to a cardiac origin. Key characteristics such as cardiovascular function and nutritional status had rarely been documented in literature; these characteristics are potential key determinants of early death.
Such detailed quantitative natural history data may serve as a control cohort in the evaluation of future therapeutic interventions, with placebo-controlled studies deemed not feasible due to the rarity of MCT8 deficiency.
DEVELOPMENT OF (POTENTIAL) THERAPIES
Currently, no therapeutic strategies are registered for MCT8 deficiency. The severity of this syndrome and its impact on quality of life of patients and their relatives, and the high mortality rate in infancy, warrant the need for adequate therapies. Since the description of the first patients with MCT8 deficiency, major research initiatives have been taken to design and develop therapies. This paragraph discusses the different treatment strategies that have been evaluated over time.
The ideal therapy should ameliorate or prevent the neurocognitive phenotype and should alleviate the peripheral thyrotoxicosis. It is generally accepted that putative beneficial effects on the neurocognitive phenotype are most prominent when treatment is commenced early in life, and are likely limited in older patients (analogous to congenital hypothyroidism). With symptoms of the peripheral phenotype linked to increased mortality risk at young age (15), therapies that can effectively and safely modulate the peripheral thyrotoxicosis should also be considered relevant, irrespective of their effects on neurocognitive outcomes.
Upon treatment with T4 monotherapy, serum T3 concentrations rise further; in line with this increase, body weight and other markers of thyrotoxicosis deteriorated even further. Hence, treatment with T4 monotherapy appears to aggravate the peripheral thyrotoxic phenotype. As T4 is not able to enter the human brain without functional MCT8, no improvement of neurocognitive symptoms was observed. Therefore, it is generally accepted that postnatal treatment with T4 monotherapy is not recommended in MCT8 deficiency. Following these observations, it was hypothesized that by combining T4 with propylthiouracil (PTU), which inhibits DIO1 in peripheral tissues, the aggravation of the thyrotoxicosis could be overcome. This strategy has been reported in five patients [(55, 83-86), reviewed in (18)] and resulted in a decrease in serum T3 concentration and improvement of biochemical markers and clinical symptoms of the peripheral thyrotoxicosis (decrease in serum SHBG concentrations, increase in body weight, decrease in heart rate). As with T4 monotherapy, no alterations in the neurocognitive phenotype were observed upon treatment. Interestingly, neurocognitive improvement was observed upon intra-uterine instillation of T4 in one mother pregnant with a foetus with MCT8 deficiency, followed by T4 + PTU combination therapy after birth (87). It remains unclear why prenatal treatment had a beneficial effect on neurocognition, whereas this is not the case for postnatal treatment (9,78). This can possibly be explained by the timing of the intervention or by differential expression of alternative thyroid hormone transporters in the prenatal brain. However, as PTU carries a high risk of severe adverse reactions (e.g. agranulocytosis and liver failure), the pros and cons of treatment with a combination of T4 and PTU should be carefully balanced, in particular in vulnerable patients such as patients with MCT8 deficiency (88).
Recent studies found that intranasal thyroid hormone administration, theoretically bypassing the BBB, did not result in alterations in cerebral thyroid hormone content, but rather aggravated the peripheral thyrotoxicosis in Mct8 KO mice (89). Hence, following the conclusion of the authors, intranasal thyroid hormone administration is likely not a valid treatment option in patients with MCT8 deficiency. The efficacy of this therapy has not been formally tested in patients.
With classic (anti-)thyroid drugs not being effective in MCT8 deficiency, alternative treatment options were explored. Research initiatives have focussed on compounds called thyroid hormone analogues, which are able to cross the cell membrane independently of MCT8, with intracellular effects and degradation similar to T3 (90). In Mct8 KO mice, the thyroid hormone analogue diiodothyropropionic acid (DITPA) was able to effectively normalize serum TSH concentrations and Dio1 expression in the liver, indicative of a decrease of the peripheral thyrotoxic state (91,92). However, as Mct8 KO mice do not fully resemble the human cerebral pathophysiology of MCT8 deficiency, these studies were less suited to address the effect of DITPA on the neurocognitive phenotype. Following these preclinical studies, four patients (median age 25 months, range 8.5-25 months) were treated with DITPA on a compassionate use basis for a median time of 38.5 months (range 26-40 months) (85). Upon treatment, serum T3 concentrations normalized, and subsequent improvement of some markers of peripheral thyrotoxicosis, including body weight, was observed in some of the subjects. However, DITPA treatment did not result in improvement of the neurocognitive phenotype in these patients, despite their relatively young age. After these reports, additional work showed that DITPA is able to cross the murine placenta independently of Mct8 (93). If similar observations are made for the human situation, this would make DITPA a potentially suitable compound for prenatal treatment of foetuses with MCT8 deficiency; this, however, remains to be elucidated. In zebrafish larvae lacking Mct8, DITPA was able to partially restore myelination six days after fertilization (94). Moreover, Lee et al. showed that DITPA restores myelination in oligodendrocytes derived from human embryonic stem cells in which MCT8 was knocked down using a short hairpin (sh)RNA technique (95).
It should be noted that, despite its beneficial effects on the peripheral phenotype, DITPA is currently not commercially available. Alongside DITPA, the therapeutic potential of the thyroid hormone analogue triiodothyroacetic acid (Triac) for MCT8 deficiency has been studied extensively. Triac is a naturally occurring thyroid hormone metabolite, which potently binds and activates the TRs and enters the cell independently of MCT8 (22). Following these observations, the efficacy of Triac was evaluated in Mct8/Oatp1c1 DKO mice. Importantly, a clear improvement in brain development was observed when mice were treated directly after birth (from postnatal day 1 to 12; dose 50 ng/g body weight), with complete restoration when high doses of Triac (400 ng/g body weight) were administered, as illustrated by normalization of cerebellar Purkinje cell dendritogenesis (calbindin immunoreactivity) and myelination (MBP immunoreactivity) (22). In line with these findings, Triac was able to completely rescue myelination in mct8-/- zebrafish larvae six days after fertilization, when brain damage had already occurred (94). An international phase II trial including 46 patients with a median age of 7.1 years demonstrated that Triac effectively reduced the high serum T3 concentrations and improved subsequent clinical and biochemical features of thyrotoxicosis, including body weight, heart rate, occurrence of PACs, SHBG and creatinine. In a subset of patients, treatment was extended up to 3 years and resulted in sustained beneficial effects on the peripheral phenotype. Except for transient signs of increased thyrotoxicosis in a small subset of patients, no (severe) adverse events related to Triac were observed. Furthermore, a trend of improvement in neurodevelopment was observed in patients treated early in life (younger than four years at baseline) (96).
Previous studies have demonstrated transplacental passage of Triac in women, also making Triac a suitable compound for prenatal treatment of foetuses with MCT8 deficiency (97).
In pursuit of the identification of thyromimetic molecules with a larger bioavailability in the brain, the thyroid hormone analogue sobetirome and its prodrug sob-AM2 have been studied in Mct8/Dio2 DKO mice (98). Although upon treatment serum T3 concentrations decreased, the effects of sobetirome and sob-AM2 on the peripheral phenotype and its safety profile remain unclear. In particular, upon sobetirome and sob-AM2 treatment, expression of T3-responsive genes did not alter in the liver and were increased in the heart, suggesting that these compounds may aggravate the peripheral thyrotoxic state. Cerebral Hairless expression levels were restored, indicative of a beneficial effect on cerebral thyroid hormone signalling. Additional evaluation of efficacy and safety is warranted before this drug can be used in a clinical setting.
Next to recovering thyroid hormone signalling via thyroid hormone analogues that enter the cell independently of MCT8, attempts to recover MCT8 function have been made. Gene therapy using an adeno-associated virus 9 (AAV9) vector containing human MCT8 cDNA was able to increase cerebral thyroid hormone signalling in Mct8 KO mice (99). These effects were only observed after intravenous administration, but not after intracerebroventricular administration, which underlines the relevance of MCT8 at the BBB. Using a similar approach, Zada et al. showed that upregulation of the fusion protein Mct8-tagRFP completely rescued hypomyelination in mct8-/- zebrafish larvae (94). The effects of these different forms of gene therapy on the peripheral thyrotoxic phenotype, as well as their safety profiles, are still to be explored. Hence, clinical evaluation of these promising therapies is not yet applicable.
In addition, different chemical chaperones have been explored to improve trafficking and stability of mutant MCT8, thus potentially increasing MCT8-mediated thyroid hormone transport across the cell membrane. Depending on the cell model, different effects have been found. In overexpression models of the p.F501del mutation, treatment with phenylbutyrate and dimethylsulfoxide enhanced residual MCT8 transport capacity (100,101). In fibroblasts containing this mutation, considered the most representative ex vivo model, these effects were not observed (25). The effects of such molecules on the thyrotoxic and neurocognitive phenotype are unclear, as none of these compounds have been tested in animal models of MCT8 deficiency. As this potential therapy is only applicable to a subset of mutations, its role in clinical practice will likely be limited.
CONCLUSIONS AND FUTURE PERSPECTIVES
Since the first recognition of MCT8 deficiency, major discoveries have been made, helping to better understand this rare disorder. By utilizing different in vitro, ex vivo and in vivo models, it became clear that MCT8 plays a particularly crucial role at the BBB. Additional studies are warranted to fully understand the role of MCT8 within other parts of the brain. Results from recent deep phenotyping re-emphasized the relevance of the peripheral phenotype in patients with MCT8 deficiency, as being underweight early in life, a proxy of the peripheral thyrotoxicosis, has been linked to increased mortality risk at a young age. These findings stress the importance of the thyrotoxic features as a therapeutic target and thus the availability of treatment options that modulate the peripheral phenotype. The thyroid hormone analogue Triac, the only available safe treatment at this moment, is able to effectively and safely reduce the peripheral thyrotoxicosis in patients and might improve the neurocognitive phenotype when treatment is initiated early in life. Currently, a second international phase IIb trial is studying the effect of Triac on neurocognitive outcomes in young patients (<30 months at baseline; NCT02396459). Another research initiative aims to treat women pregnant with a foetus with MCT8 deficiency with intra-uterine instillation of DITPA on a compassionate use basis, in order to evaluate its effects on brain development when commenced early in life (NCT04143295). Aiming to gain a deeper understanding of the needs of patients, their parents and their physicians, an international registry linked to the European Reference Network on Rare Endocrine Conditions centralizes knowledge on MCT8 deficiency (https://mct8registry.erasmusmc.nl/en/index.html).
Together with other studies, these initiatives aim to provide additional insights into MCT8 deficiency and to decipher whether thyroid hormone analogues are able to modulate the neurocognitive phenotype when treatment is initiated early in life. With the availability of promising (prenatal) therapy, efforts should be made to detect MCT8 deficiency as early in life as possible. This goal could potentially be achieved by redefining neonatal screening programs. Given the rarity and complexity of MCT8 deficiency, international collaboration amongst researchers is warranted to attain these goals.
AUTHOR CONTRIBUTIONS
FSvG, NG, SG, and WEV reviewed available literature and wrote the manuscript. All authors contributed to the article and approved the submitted version.
"year": 2021,
"sha1": "f8d59db9c1a36adfb1590443fea2f84820ea071c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2021.723750/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f8d59db9c1a36adfb1590443fea2f84820ea071c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Jackknife Estimator of Species Richness with S-PLUS
An estimate of the number of species, S, usually called species richness by ecologists, in an area is one of the basic statistics used to ascertain biological diversity. Traditionally ecologists have used the number of species observed in a sample, S_0, to estimate S, realizing that S_0 is a lower bound for S. One alternative to S_0 is to use a nonparametric procedure such as jackknife resampling. For species richness, a closed form of the jackknife estimator is available. Typically statistical software contains only the traditional iterative form of the jackknife estimator. The purpose of this article is to propose an S-PLUS function for calculating the noniterative first order jackknife estimator of species richness and some associated plots and statistics.
Introduction
Estimating the true number of species in an area, S, usually called species richness by ecologists, is one of the basic statistics used to ascertain biological diversity. To estimate species richness one would naturally consider the observed count of species, S_0, from a given sample. However, it is clear that S_0 is a lower bound for the true number of species. For S_0 to accurately estimate S the researcher must actually observe every species. If the researcher can only sample a few plots from the area, then S_0 is likely to be smaller than S. Even if a census of the area is done it is likely that some species will be missed because of human error, environmental fluctuations that affect observations, or very small species detection probabilities.
Jackknife estimation
In the late 1970s statisticians and ecologists began to look avidly for alternative procedures for estimating S. The estimators considered included frequentist, Bayesian and nonparametric philosophies, and sampling from finite and infinite populations (Mingoti and Meeden 1992; Bunge and Fitzpatrick 1993).
One alternative to using S_0 as an estimator of species richness, presented by Smith and van Belle (1984), is to use a nonparametric procedure such as jackknife resampling. The jackknife is useful because it is known to reduce bias and, for estimates of species richness, it has a closed form. Another useful characteristic of the jackknife estimator of species richness is that the estimator is based on the presence or absence of a species in a given plot rather than on the abundance of the species. To use the jackknife estimator for species richness, data must be collected at n locations (e.g., plots) in the designated area for which S is to be estimated.
The basic idea behind the first order jackknife estimator of S is to base it on the amount of unique species information that is contained in each observation. Following Smith and van Belle (1984):

1. Remove one of the observations, say i, where i ∈ {1, 2, ..., n} denotes the labels of the sample units.
2. Compute an estimate of S, Ŝ_{-i}, on all observations excluding i.

3. Compute the pseudovalue S*_i = n S_0 - (n - 1) Ŝ_{-i}, where S_0 is the estimate based on all n observations.

4. Repeat steps 1-3 for each i = 1, 2, ..., n.

5. The first-order jackknife estimator of S is the mean of the pseudovalues, J_n(S) = (1/n) Σ_{i=1}^{n} S*_i. Note that in step 1 two observations could be removed, and in fact, as many as n - 1 observations could be removed to obtain higher order jackknife estimators (Smith and van Belle 1984).
A closed form solution to the jackknife algorithm is available. Here the jackknife estimator depends on the number of unique species in the removed observations (e.g. plots). The closed form of the first order jackknife estimator of species richness, as given by Smith and van Belle (1984), is

J_n(S) = S_0 + ((n - 1)/n) Σ_{i=1}^{n} r_i,

where S_0 is the observed species count over all plots, r_i is the number of species that are found only in plot i, and n is the number of plots. Note that when all species are observed on at least two plots, J_n(S) = S_0 because r_i = 0 for all i = 1, 2, ..., n. When there is more variability between observations, the r_i's and J_n(S) become larger.
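As an illustration, the closed form can be checked against the iterative leave-one-out procedure described earlier. The following Python sketch (the paper's own implementation is in S-PLUS; the plot data here are hypothetical) computes both:

```python
def closed_form_jackknife(plots):
    """plots: list of sets, each holding the species observed on one plot."""
    n = len(plots)
    s0 = len(set().union(*plots))  # observed richness S_0
    # r_i = number of species found only in plot i
    r = [sum(1 for sp in p
             if all(sp not in q for j, q in enumerate(plots) if j != i))
         for i, p in enumerate(plots)]
    return s0 + (n - 1) / n * sum(r)

def iterative_jackknife(plots):
    """Leave-one-out pseudovalue form of the same estimator (steps 1-5)."""
    n = len(plots)
    s0 = len(set().union(*plots))
    pseudovalues = []
    for i in range(n):
        s_minus_i = len(set().union(*(p for j, p in enumerate(plots) if j != i)))
        pseudovalues.append(n * s0 - (n - 1) * s_minus_i)
    return sum(pseudovalues) / n

plots = [{"A", "B"}, {"A", "C"}, {"B", "C", "D"}, {"A", "B"}]  # hypothetical data
print(closed_form_jackknife(plots))  # 4.75
print(iterative_jackknife(plots))    # 4.75
```

Because the pseudovalue mean reduces algebraically to the closed form, the two functions agree for any presence/absence data; here S_0 = 4 and only species D is unique to a single plot.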
An estimator of the variance of J_n(S) is given by

VAR[J_n(S)] = ((n - 1)/n) Σ_{i=1}^{n} (r_i - r̄)²,

where r̄ = (1/n) Σ_{i=1}^{n} r_i. This is a measure of the average deviation of the r_i's from the observed mean of the r_i's. Our S-PLUS function reports the standard error of J_n(S), the square root of VAR[J_n(S)].
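A minimal Python sketch of this variance estimator and the resulting standard normal confidence interval, using hypothetical r_i values (the paper's implementation is in S-PLUS):

```python
import math

def jackknife_se(r):
    """Standard error of J_n(S) from the per-plot unique-species counts r_i."""
    n = len(r)
    rbar = sum(r) / n
    variance = (n - 1) / n * sum((ri - rbar) ** 2 for ri in r)
    return math.sqrt(variance)

r = [0, 0, 1, 0]  # hypothetical r_i values for n = 4 plots
s0 = 4
jack = s0 + (len(r) - 1) / len(r) * sum(r)  # 4.75
se = jackknife_se(r)                        # 0.75
lower, upper = jack - 1.96 * se, jack + 1.96 * se
print(jack, se, (lower, upper))
```

With most r_i equal to zero the variance is small, which matches the small standard errors reported for the example data later in the article.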
Performance of the jackknife estimator
A few researchers have evaluated the performance of J_n(S), including Smith and van Belle (1984), Palmer (1990), and Hellmann and Fowler (1999). Smith and van Belle (1984) evaluated J_n(S) under the assumption that the abundance of a given species has a Poisson distribution. They showed that J_n(S) is less biased than S_0 and that the expected bias approaches zero as the species density (number of species per plot) increases. Palmer (1990) evaluated J_n(S) based on samples taken from hardwood stands in North Carolina. A census was taken at 30 locations to obtain the "true" species richness, then samples were taken from plots on the 30 locations. Palmer used the mean deviation to show that J_n(S) has less bias than S_0, and used the mean squared deviation to show that J_n(S) has less variability (more precision) than S_0. Hellmann and Fowler (1999) considered the bias, precision, and accuracy of J_n(S) based on samples from five different forested locations in Michigan. Each location contained 160 plots. The five locations had different types of tree growth and ranged from 5 total species to 25 total species. The 160 plots at each location were considered to be the population of plots, and samples of different sizes were taken from each population.
Hellmann and Fowler's results indicated that J_n(S) is less biased than S_0 when less than 60% of the "population" is sampled, and that J_n(S) is typically less precise than S_0 but usually more accurate than S_0. Note that Hellmann and Fowler measured precision by VAR[J_n(S)] and measured accuracy by MSE[J_n(S)]. They also pointed out that the characteristics of J_n(S), as well as S_0, depend on the sample size.
Note that Palmer (1990) obtained different results regarding precision from Hellmann and Fowler (1999). This may be because they were looking at different data sets. However, they are not exactly clear about their definitions, so it is possible that they observed different results because they were looking at different characteristics or using different estimates of precision.
Algorithm for calculating the jackknife estimate
S-PLUS does contain an iterative jackknife procedure, but as mentioned previously, a closed form jackknife estimator exists for estimating S. The following outlines an algorithm for an S-PLUS function which calculates a noniterative first order jackknife estimate of species richness for each of several sampling periods (e.g., years). The difficult task is in identifying the number of unique species in each plot.
1. The data set should have the following headings: "Period" for identifying the sampling period (e.g. years), "Plot" for each unique sampling location, and "Species" for the actual species observed in each period on each plot.
2. Identify the number of periods and the number of plots in the data set for future use.
3. Create a matrix that contains the number of unique species on each plot for each year.
(a) Create a storage matrix with the number of rows equal to the total number of plots, and with three columns for sampling period, plot, and count.
(b) Identify the plots listed for each period, and compute the total number of species for each period.
(c) Identify the number of species observed on the plots other than plot i, and subtract this from the total number of species. The difference is the number of unique species on plot i, r_i.
(d) Fill the storage matrix with the information obtained in steps (b) and (c) and assign header names: "Period", "Plot", "Count".
4. Create a matrix of jackknife estimates.
(a) Create a storage matrix with the number of rows equal to the number of sampling periods in the data set, and with columns for sampling period, the observed count, the jackknife estimate, the standard error of the jackknife estimate, and the number of plots for the given period.
(b) Identify the plots listed for each year and compute the total number of species for each sampling period.
(c) Obtain the unique species counts, r_i, for each period from the data set created in step 3.
(d) Multiply by the appropriate constants to get the estimate of the first order jackknife and the estimate of its variance.
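The counting logic of steps 3 and 4 can be sketched in Python as follows (the paper's actual function is written in S-PLUS; the records below are hypothetical):

```python
from collections import defaultdict

def jackknife_by_period(records):
    """records: iterable of (period, plot, species) tuples."""
    by_period = defaultdict(lambda: defaultdict(set))
    for period, plot, species in records:
        by_period[period][plot].add(species)
    estimates = {}
    for period, plots in by_period.items():
        sets = list(plots.values())
        n = len(sets)
        s0 = len(set().union(*sets))  # observed richness for this period
        # unique species per plot: present on plot i and on no other plot
        r_total = sum(
            sum(1 for sp in p if all(sp not in q
                                     for j, q in enumerate(sets) if j != i))
            for i, p in enumerate(sets))
        estimates[period] = s0 + (n - 1) / n * r_total
    return estimates

records = [("1991", "P1", "A"), ("1991", "P1", "B"),
           ("1991", "P2", "A"), ("1991", "P2", "C"),
           ("1991", "P3", "D")]
print(jackknife_by_period(records))  # jackknife estimate for 1991 is about 6.0
```

Here S_0 = 4 with n = 3 plots, and species B, C and D are each unique to one plot, so the estimate is 4 + (2/3)·3 ≈ 6.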
S-PLUS functions
The basic structure of our S-PLUS function follows the previously stated algorithm with some additional plots and statistics. The function also calculates the standard error of the first order jackknife estimate and a standard normal confidence interval (95% by default). Note that the use of a normal confidence interval is appropriate if the number of plots is large, say, n ≥ 30. Our function produces a table identifying the number of plots in which unique species occur for each sampling period, a set of notched box plots (McGill, Tukey, and Larsen 1978) of the number of species on each plot for each sampling period, and a dot plot (Cleveland 1984) of the observed counts and first order jackknife estimates.
The following is our S-PLUS code. Note that the function, jack.fun, calls the functions species.boxplot and jackone.plot, which are also displayed here. The functions were coded in S-PLUS 5.1 (Insightful Corporation 1999) for Unix operating systems (see, for example, Krause and Olson (2000)). The function has also run successfully in S-PLUS 6.2 on Windows XP. Note that the function will calculate J n (S) and S 0 in R if the plotting sections are removed.
The function jack.fun has five arguments. The first argument, mydata, should be replaced with the name of the data frame containing the data set you wish to use. The closing portion of jack.fun computes the confidence limits, labels the output matrix and produces the plots:

-qnorm(1-alpha/2)*sqrt(jackone.variance)
index.row <- index.row + 1
}
headers <- c("Period", "Number of Plots", "Observed", "Jackknife Estimate",
             "Standard Error", "Lower Limit", "Upper Limit")
dimnames(jackone.est) <- list(NULL, headers)
##create box plot of Species per Plot##
if (box.plot) species.boxplot(data)
if (est.plot) jackone.plot(jackone.est)
return(species.unique, jackone.est)
}

The following is S-PLUS code for the function species.boxplot, which is called by the function jack.fun for creating variable width notched box plots of the number of species per plot for each year. For an example of species.boxplot output see Figure 1.
Example
We provide an example of species richness estimates based on data collected for the Land Condition Trend Analysis (LCTA) project at Fort Riley, KS. The LCTA project monitors the environment at the fort and collects information on soil, vegetation, birds and mammals. The data for birds have been collected from 1991 through 2002 on approximately 60 plots per year (sampling period). The data include the year, the plot, and the species found on each plot for each year. Table 2 contains a list of the number of plots in which unique species occur. Notice that for our data most plots contain zero unique species. In 1991, only 8 plots contained one unique species and no plots contained 2 or 3 unique species. Table 3 displays the year, the number of plots sampled each year, the observed count, S_0, the jackknife estimate, J_n(S), the standard error of J_n(S), and the lower and upper limits of the 95% standard normal confidence interval for S based on J_n(S). At first glance the standard errors for J_n(S) may seem unreasonably small. However, as noted, the numbers of unique species are very small, which drives down the variances of J_n(S). Figure 1 contains variable width notched box plots of the number of species per plot for each year. Figure 2 is a dot plot of S_0 and J_n(S) for each year.
Summary
We have demonstrated the need for a function which calculates first order jackknife estimates for species richness and how to implement such a function. Note that a function could be written for any order jackknife procedure. However, the calculations for the jackknife variance quickly become difficult. Also, based on our experience, the second order jackknife procedure does not give estimates that are much different from the first order jackknife estimates.
"year": 2005,
"sha1": "da0068da878a4fcd14bb989ec6c2f90e875fbc64",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.18637/jss.v015.i03",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "aa4b94c07f668a27ac970741b7cc1b97823c4eb2",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Technical flaws in multiple-choice questions in the access exam to medical specialties (“examen MIR”) in Spain (2009–2013)
Background: The main factor that determines the selection of a medical specialty in Spain after obtaining a medical degree is the MIR (“médico interno residente”, internal medical resident) exam. This exam consists of 235 multiple-choice questions with five options, some of which include images provided in a separate booklet. The aim of this study was to analyze the technical quality of the multiple-choice questions included in the MIR exam over the last five years.

Methods: All the questions included in the exams from 2009 to 2013 were analyzed. We studied the proportion of questions including clinical vignettes, the number of items related to an image and the presence of technical flaws in the questions. For the analysis of technical flaws, we adapted the National Board of Medical Examiners (NBME) guidelines. We looked for 18 different issues included in the manual, grouped into two categories: issues related to testwiseness and issues related to irrelevant difficulties.

Results: The final number of questions analyzed was 1,143. The percentage of items based on clinical vignettes increased from 50% in 2009 to 56-58% in the following years (2010-2013). The percentage of items based on an image increased progressively from 10% in 2009 to 15% in 2012 and 2013. The percentage of items with at least one technical flaw varied between 68 and 72%. We observed a decrease in the percentage of items with flaws related to testwiseness, from 30% in 2009 to 20% in 2012 and 2013. While most of these issues decreased dramatically or even disappeared (such as the imbalance in the correct option numbers), the presence of non-plausible options remained frequent. With regard to technical flaws related to irrelevant difficulties, no improvement was observed; this is especially true with respect to negative stem questions and “hinged” questions.

Conclusion: The formal quality of the MIR exam items has improved over the last five years with regard to testwiseness.
A more detailed revision of the items submitted, checking systematically for the presence of technical flaws, could improve the validity and discriminatory power of the exam, without increasing its difficulty.
Background
The examination that regulates access to medical specialties in Spain is known as the MIR exam (MIR: "médico interno residente", internal medical resident). The Ministry of Health has designed and organized this exam since 1978. The exam is officially defined as "a nationwide test in which the applicants will receive a total individual score, obtained from the sum of the result of a multiple-choice test (carried out in the simultaneously established exam rooms in assigned locations in different regions of Spain), and the score derived from their academic merits" [1]. The aim of this exam is to objectively evaluate the applicants' medical knowledge. Therefore, the quality of the exam is of the utmost importance. The MIR exam currently includes 225 multiple-choice questions (with 5 options), plus 10 additional backup questions to replace items excluded by the qualifying committee after the exam (due to refutations from the examinees); each error is penalized with 0.25 points. The exam score accounts for 90% of the total score, with the remaining 10% based on academic merits. Thus, the MIR exam is the main factor that determines the priority of the applicants for choosing the specialty and the medical center.
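As a simplified sketch of the scoring rule just described (the Ministry's actual score normalization is more involved; the candidate numbers below are hypothetical):

```python
def raw_exam_score(correct, wrong):
    """One point per correct answer, 0.25 deducted per error; blanks score 0."""
    return correct - 0.25 * wrong

def total_score(exam_fraction, merits_fraction):
    """Exam counts for 90% of the total, academic merits for 10% (both in [0, 1])."""
    return 0.9 * exam_fraction + 0.1 * merits_fraction

raw = raw_exam_score(correct=180, wrong=40)  # 170.0 (out of 225 scored items)
print(raw)
print(total_score(raw / 225, merits_fraction=0.8))
```

The 4:1 penalty ratio means that, under pure guessing among five options, expected gain per answered item is zero, which is the usual rationale for a 0.25 deduction.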
In order to achieve reproducibility, fairness, and validity in the exam, it is not enough to include questions covering a wide range of medical topics; the exam must also be constructed appropriately so as to avoid technical flaws. The NBME defines testwiseness issues as those that "make it easier for some students to answer the question correctly, based on their test-taking skills alone". On the other hand, irrelevant difficulties "make the question difficult for reasons unrelated to the trait that is the focus of assessment." Developing such questions is not easy, because it requires the specialists who write them to have thorough and up-to-date knowledge in their area of expertise, as well as skill in constructing written test questions [2]. A guide for constructing written test questions has been written by the National Board of Medical Examiners (NBME), helping university teachers to improve the quality of their exams [3]. Haladyna et al. published a list of recommendations based on reviews of the scientific evidence, including studies published since 1990 [4].
The aim of this study is to analyze the technical quality of the MIR multiple-choice questions from tests given over the last five years, including both technical flaws that facilitate the answer by using testwiseness, and those related to irrelevant difficulties.
Methods
We analyzed all the questions included in the 2009 to 2013 exams, obtained from the web page of the Ministry of Health [5].
Each exam had 235 multiple-choice questions, with five options. The study was carried out by a group of five medical doctors with different specialties as well as with expertise in constructing and analyzing multiple-choice questions.
In the first place, each question was classified depending on whether or not it included a clinical vignette and whether or not it included an image. The questions that included an epidemiological problem with actual data were considered as including a clinical vignette.
For the analysis of technical flaws, the questions were distributed among researchers based on their clinical expertise. The researchers were expert item writers with training in different medical specialties: Allergology, Neurology, Oncology, Endocrinology and Family Medicine. They had expertise in exam analysis; in less than 1 % of the questions, the researchers asked for help from other specialists (usually to confirm that an option was not plausible). At the beginning of the study, the five researchers got together to analyze a series of items in order to agree on common criteria for assessing the different issues. In addition, several meetings were held to discuss doubts that had arisen, resolving the issues by consensus among the five researchers.
We adapted the NBME guidelines for the analysis [3]. These guidelines group technical flaws into two categories: issues related to testwiseness and issues related to irrelevant difficulties as explained in the introduction.
The different issues included in the manual were grouped into the 18 categories listed in Table 1, by agreement of the five researchers. Their presence or absence was rated by means of a dichotomous scale (yes/no).
Statistical analysis
The data were entered into a Microsoft Excel 2010 spreadsheet (Microsoft Corp., Redmond, WA, USA) and exported to STATA 12 for the analysis. We carried out a descriptive and inferential analysis. We used chi-squared tests to compare differences between years (interannual differences in the percentage of vignettes and images) and logistic regression to study tendencies throughout the five years in the remaining analyses.
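The chi-squared comparison described above can be sketched in a few lines. The original analysis was carried out in STATA; the pure-Python function and the two-by-two counts below are hypothetical, for illustration only.

```python
def chi_squared(table):
    """Pearson chi-squared statistic for an R x C contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: items with/without a clinical vignette in two exam years
observed = [[10, 20],
            [20, 10]]
print(round(chi_squared(observed), 4))  # 6.6667
```

With 1 degree of freedom, a statistic of this size would correspond to p < 0.05; the study's actual p-values come from STATA's implementation, not this sketch.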
Ethics statement
This study did not involve any personal human data.
Results
The final number of questions analyzed was 1,143. A total of 32 questions were not included in the analysis as they had been discarded by the qualifying committee after the refutations from the examinees.
Clinical vignettes and images
The percentage of items based on clinical vignettes increased from 50 % in 2009 to 56-58 % in the following years (2010-2013). The percentage of items based on an image increased progressively from 10 % in 2009 to 15 % in 2012 and 2013. The interannual differences in the percentage of vignettes and images were not significant.
Technical item flaws
There was an overall decrease in the percentage of issues related to testwiseness throughout the five years. The main reduction (31.6 % decrease) was observed between 2010 and 2011, with stable percentages of approximately 20 % in the last years (as shown in Fig. 1a). The decrease in flawed items included most issues, but not all.
We observed no significant variation in the global percentage of issue flaws related to irrelevant difficulties in the five years analyzed, finding stable values of approximately 60 % (Fig. 1b).
Issues related to testwiseness
First, we observed an imbalance in the frequency with which each option number was the correct answer in 2009 (when the most frequent correct answers were 3 and 4); this imbalance disappeared in 2010 and the distribution remained adequate in the following years (Fig. 2). Figure 3 shows the percentage of the other issues related to testwiseness in the five years analyzed. Only the issues that showed significant or near-significant changes throughout the years will be commented on in more detail.
With regard to logical clues (Fig. 3b), there is a significant decrease in the percentage of flawed items over the five years that were studied (p = 0.04).
In Fig. 3d, a significant improvement throughout the five years can also be observed in the "the correct answer is longer, more specific or more complete than other options" category (p < 0.001).
Lack of uniformity in the options was a frequent flaw in the items studied. A highly significant improvement throughout the five years that were analyzed (from 10 to 4 %) is observed in Fig. 3g (p < 0.001).
As shown in Fig. 3h, the most common flaw within this category was the presence of non-plausible options (10-15 % of the items). A non-significant trend (logistic regression p = 0.08) towards an increase in the percentage of flawed items is present.
Item flaws related to irrelevant difficulties
Figure 4a shows the percentage of items that cannot be answered without looking at the options, with minimal variation throughout the years. This subtype encompasses most of the flawed items found within this category (irrelevant difficulties), with percentages ranging between 50 and 57 % of the total number of items in the exam. Of note, many of these items also had other technical flaws. Figure 4b shows a slight trend towards an increase in the percentage of items with their answer "hinged" to the answer of a related item, although the increase is not significant.
With regard to negative stems (including "except" or "not" in the lead-in), there is no improvement in the percentage of flawed items; in fact, there is actually a slight increase (Fig. 4c).
The percentage of items with stem or options that are tricky or unnecessarily complicated, with unnecessary information, or which are too complex, significantly improved; the last of these subtypes had a p value of 0.03 (Fig. 4e).
Within the last five years analyzed, no items were found to include the terms "none of the above" or "all of the above" within the options given.
We also looked for writing/orthographic errors and outdated terms in the items; although the percentage was quite low, there were no significant differences throughout the years (p = 0.76).
The maximum number of flaws found in the same question was six.

Table 1 Adapted questionnaire from the National Board of Medical Examiners (NBME®) guidelines

Issues related to testwiseness
- The number of the correct option
- One or more distractors don't follow grammatically from the stem
- One or more options are collectively exhaustive
- Terms such as "never" or "always" are used in options
- The correct answer is longer, more specific, or more complete than other options
- A word or phrase is included in the stem and in the correct answer
- The correct answer includes the most elements in common with the other options
- There is lack of uniformity in the options
- Some of the distractors are not plausible

Issues related to irrelevant difficulties
- The item cannot be answered without looking at the options
- The answer to an item is "hinged" to the answer of a related item
- Negative-phrased item ("except" or "not" in the lead-in)
- Terms in the options are vague (e.g., "rarely," "usually")
- The stem or the options are tricky or unnecessarily complicated
- The stem or the options include unnecessary information
- The stem or the options are too complex, with more than one concept included
- Options are in an illogical order
- "None of the above" or "All of the above" are used as an option

Others
- Orthographic or syntax errors and use of outdated terms

Overall, the percentage of items without any type of technical flaw (according to our analysis) did not vary significantly in the last five years (28 to 32 %).
We carried out an additional analysis comparing the percentage of flawed items in questions including and not including clinical vignettes. The results showed no differences between them (the curves overlapped), except for answers "hinged" to the answer of a previous question, which were more frequent in questions with clinical vignettes. This difference was an expected finding, as most items based on an image also include a clinical vignette, and every image always has two related questions.
Discussion
Our study has found a high percentage of items with technical flaws in the MIR exams in the period 2009-2013. However, most of these flaws were related to irrelevant difficulties, and the majority of the flaws related to testwiseness did indeed improve over the five-year period (with some of them even disappearing). The percentage of items with any type of technical flaw did not vary significantly over the last five years (68 to 72 %). Compared with similar studies in the literature, which reported 28 to 75 % [6], 35.8 to 65 % [7], and 20 % [8], this percentage falls into the high range. In the last two studies the percentage of flawed items was lower, but the number of items studied and the number of possible issues were also lower. In our study, the number of different flaws per item ranged between one and six. In the New England Journal of Medicine (NEJM) Weekly CME Program, multiple-choice questions are used for obtaining continuing medical education (CME) credits. The quality of these questions was analyzed in a recent study, in which all the questions analyzed had between 3 and 7 different types of technical flaws [9].
Technical flaws related to testwiseness favor the use of "tricks", making it easier to answer the questions just by using test-taking abilities. Tarrant et al. observed that borderline students benefit from flawed items [6]. From our point of view, items with these flaws should be systematically rejected or amended because they could severely affect the validity of the test. In fact, most of these flaws can be avoided by following some simple rules. In our study, the main reduction (31.6 %) in the percentage of these flaws was observed in 2011, probably due to a better selection process. The most common flaw within this category, showing no improvement over the time period analyzed, was the presence of non-plausible options. Bonillo [10] analyzed the 2005 and 2006 MIR exams from a psychometric perspective and demonstrated that one or two of the options given for several different multiple-choice questions were non-functioning. The frequency of this flaw is a good example of the difficulty involved in writing good-quality multiple-choice questions. This issue could be improved by dedicating more time and effort to each multiple-choice question or, alternatively, by reducing the number of possible answers, as other authors suggest [11,12].

Irrelevant difficulties are those not associated with the subject that the question aims to evaluate. They make the item more difficult, but they do not help to discriminate between students who are knowledgeable and those who are ignorant regarding the subject matter [6]. A more difficult question needs more time to be answered. If the mean time needed to answer each question increases, either the amount of time available for the exam has to be increased, or the number of questions has to be reduced so as to fit within the scheduled testing time. Logically, the size of this effect depends on the proportion of items with these irrelevant difficulties.
This effect has little practical relevance if the percentage of flawed questions is small, but it may significantly affect the exam if the proportion is high, as in the case of this study. In fact, the number of multiple-choice questions in the MIR exam had to be reduced by 10 % (from 250 to 225, plus 10 reserve items in both cases) since 2009 due to the increased difficulty of the questions. A reduction in the number of items reduces the validity of the exam, limiting the topics that can be evaluated [3]. In addition, lower scores in the test due to increased difficulty [7] indirectly favor the weight of other components for the global score (academic merits). The percentage of weight of academic merits in the global MIR score has been reduced over the past few years. However, this weight reduction could also be indirectly achieved by simply reducing the degree of difficulty of the items in the exam. Some subtypes of these technical flaws deserve more detailed analysis. In our study, negative stems and items that cannot be answered without looking at all the options (unfocused stem) were the most frequent mistakes; these same types of mistakes were also frequently found in other studies [6,7]. Items with these flaws require more time to be answered: in the former case, more time is required to understand what is being asked, and in the latter case, the student is forced to read all the answers offered.

(Fig. 3 legend, panels b-h: b: One or more options are collectively exhaustive. c: Terms such as "never" or "always" are used in options. d: The correct answer is longer, more specific, or more complete than other options. e: A word or phrase is included in the stem and in the correct answer. f: The correct answer includes the most elements in common with the other options. g: There is lack of uniformity in the options. h: Some of the distractors are not plausible.)
In both of these subtypes, the options are usually longer than in non-flawed items, and frequently include different concepts, thereby requiring more time to answer [4]. The percentage of questions based on a clinical vignette or on an image increased over the five-year period studied. As commented in the results section, the percentage of item flaws did not increase in the questions with a clinical vignette (with the exception of "hinged" answers). Thus, the inclusion of clinical vignettes does not decrease the technical quality of the exam. On the other hand, previous studies have indicated that clinical vignettes and images add some practical components to the knowledge evaluated in an MCQ exam, including differential diagnosis and integration of knowledge from different areas [13,14]. Taking both facts into account, we think that the increase in the percentage of questions including clinical vignettes is globally positive.
Color images used in the exam are provided in a separate booklet. Typically, image-associated items are presented in pairs (two questions per image); this is probably done for economic reasons, as color printing is expensive. Our study shows that of the 10-15 % of items that are associated with an image, in nearly half of them (4-7 %) the answer is "hinged" to the answer of another item (presumably, the other item related to the same image). This finding implies that "hinging" (if the student does not know the answer to the first question, he/she cannot answer the second) or "cueing" (one of the questions provides clues to answer the other one) was present in most of the two-item image-related clusters. "Cueing" makes answering a question much easier for students with expertise in test-solving [6,15]. "Hinging" gives correct credit to those students who know the answer to the first question (independently of whether they know the answer to the second one), while it prejudices those who do not know the answer to the first one, but would have known the answer to the second if they had known the answer to the first. The use of two items per image may be adequate, but the design of this type of item requires additional effort in order to avoid this kind of technical flaw. If hinging and/or cueing are not avoided, we believe that the cost/benefit of using images in the exam does not justify the inclusion of flawed questions.

(Fig. 4 legend, panels b-f: b: The answer to an item is "hinged" to the answer of a related item. c: Negative-phrased item ("except" or "not" in the lead-in). d: Terms in the options are vague (e.g., "rarely," "usually"). e: The stem or the options are too complex, with more than one concept included. f: Options are in an illogical order.)
A limitation of our work is the lack of individual performance data to complement the technical flaw study. Modern validity theory is a unitary concept that goes beyond the technical quality of the issues [16]. The addition of such data might have enriched these results, increasing their validity. Unfortunately, the data needed to evaluate individual performance were not publicly available.
We agree with Palmer et al. [8] that a well-constructed multiple-choice question meets many of the educational requirements needed for this type of exam, and therefore we believe that the MCQ format is adequate for the MIR exam. Evaluations are an essential component of learning, and they influence the student's approach to a subject. In this regard, the MIR exerts an influence on medical students, as they shift their learning efforts towards this exam, especially in their last year of medical school [17,18]. The same care should be taken when writing multiple-choice questions outside of medical schools and the MIR process itself. Evaluators should combine their knowledge with the art of creativity when designing such questions [19].
Conclusions
The technical quality of MIR exam items has improved throughout the last five years, especially with regard to flaws related to testwiseness, but there is still considerable room for improvement. In this regard, Medical Education Units may have a fundamental role in the training of evaluators. We suggest three actions that could help improve the future quality of the MIR exam: 1) instruct the professionals involved in writing and selecting MIR questions on how to avoid and detect technical flaws; 2) increase the number of questions submitted in order to have a larger pool for the selection process; 3) focus on the most frequent flaws: negative stems, items that cannot be answered without reading the options, hinged questions, and non-plausible distractors. Indeed, the Ministry of Health has recently announced that the number of options will be reduced from 5 to 4 in the next MIR exam in 2016.
Some of the conclusions from our study may also be relevant beyond the MIR exam. Our results suggest that testwiseness issues are easier to eliminate than those related to irrelevant difficulties. They also indicate that clinical vignettes, which have many advantages in terms of evaluation, do not have a higher proportion of technical flaws. However, the use of two questions per image carries a high risk of hinging or cueing flaws.
"year": 2016,
"sha1": "4eb3301abb0fec5492537301c443331feb5e3e05",
"oa_license": "CCBY",
"oa_url": "https://bmcmededuc.biomedcentral.com/track/pdf/10.1186/s12909-016-0559-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f591aa2de1cceba49321d4a07482aec2065f815f",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255968900 | pes2o/s2orc | v3-fos-license | Associations between depression and the incident risk of obesity in southwest China: A community population prospective cohort study
Objective This study aimed to describe the incidence of obesity and investigate associations between depression and the risk of incident obesity among residents in Southwest China. Methods A 10-year prospective cohort study of 4,745 non-obese adults was conducted in Guizhou, southwest China from 2010 to 2020. Depression was assessed by the Patient Health Questionnaire-9 (PHQ-9), while obesity was identified by waist circumference (WC) and/or body mass index (BMI). Cox proportional hazard models were used to estimate hazard ratios (HR) and 95% confidence intervals (CIs) for depression and incident obesity. Results A total of 1,115 cases of incident obesity were identified over an average follow-up of 7.19 years, with an incidence of 32.66 per 1,000 PYs for any obesity, and 31.14 per 1,000 PYs and 9.40 per 1,000 PYs for abdominal obesity and general obesity, respectively. After adjustment for potential confounding factors, risks of incident abdominal obesity for subjects with minimal (aHR: 1.22, 95% CI: 1.05, 1.43) and mild or more advanced depression (aHR: 1.27, 95% CI: 1.01, 1.62) were significantly higher than for those not depressed, while there was no significant association with incident general obesity. The risks of any incident obesity among subjects with minimal (aHR: 1.21, 95% CI: 1.04, 1.40) and mild or more advanced depression (aHR: 1.30, 95% CI: 1.03, 1.64) were significantly higher than among those not depressed, and a positive association was also found per SD increase in PHQ score (aHR: 1.07, 95% CI: 1.01, 1.13). The association was significantly stronger in Han Chinese (minimal: aHR: 1.27, 95% CI: 1.05, 1.52; mild or more advanced: aHR: 1.70, 95% CI: 1.30, 2.21) and farmers (minimal: aHR: 1.64, 95% CI: 1.35, 2.01; mild or more advanced: aHR: 1.82, 95% CI: 1.32, 2.51). Conclusion Depression increased the risk of incident obesity among adults in Southwest China, especially among Han Chinese and farmers.
This finding suggests that preventing and controlling depression may benefit the control of incident obesity.
1. Introduction
Obesity is one of the critical public health challenges worldwide. With socioeconomic growth and lifestyle changes, the World Health Organization reported that the global age-standardized prevalence of overweight and obesity was as high as 39% and 13%, respectively, among adults in 2016 (1). As a significant risk factor for multiple chronic diseases such as diabetes, metabolic syndrome, musculoskeletal disorders, and some cancers, obesity caused 4 million deaths and 120 million disability-adjusted life years worldwide in 2015 (2,3). Moreover, extensive documentation indicates that the distribution of body mass index (BMI) and average waist circumference (WC) have shifted upward (4). Obesity incidence in America increased more than 3-fold, from 5.8 to 14.8%, during 1950-2000 (5), while similar trends emerged in numerous low-income and middle-income countries (6). In China, the prevalence of overweight and obesity was 34.3 and 16.4%, an increase of 13.9 and 37.8%, respectively, from 2012 to 2020 (7).
Overweight and obesity are influenced by socioeconomic status, diet, and environment (2,8). Mental disorders have also been frequently mentioned in recent studies (9). Several epidemiological studies have examined the complex mechanism of depression-to-obesity pathways, but the evidence was mixed and varied across regions and races. The role of gender was equivocal in the association between depression and obesity, which varied among Chinese adolescents, middle-aged residents, and Americans (10)(11)(12). Some studies found that non-Latino and white individuals had a higher risk of comorbid obesity and mood disorders compared to Latino, African-American, and Asian individuals. Socio-cultural factors in different areas may also affect the relationship between obesity and depression (11)(12)(13). A meta-analysis found that depressed persons had a 58% increased risk of becoming obese (14), while another Dutch study showed that the presence of baseline depressive symptoms was not prospectively associated with the development of obesity (15). Most studies on the association between depression and incident obesity were cross-sectional studies whose findings varied over age in China (10,12,13,16,17); such designs cannot establish a causal association between depression status and obesity.
To our knowledge, prospective cohort studies covering adults to evaluate the risk of incident obesity based on different depressive states have not been reported in China. Based on Guizhou Population Health Cohort (18), this study aimed to explore the association between depression and incident obesity by analyzing the discrepancies in obesity outcomes among people with different depression levels.
2. Materials and methods
2.1. Study design and population
The Guizhou Population Health Cohort Study (GPHCS) was a prospective community-based cohort conducted in Southwest China during 2010-2020 (18). Through a multistage proportional stratified cluster sampling method, 9,280 adult residents from 48 townships in 12 districts of Guizhou Province were recruited from November 2010 to December 2012. The inclusion criteria were: (1) aged 18 and above; (2) living in the study area and having no plans to move out; (3) completing the questionnaire and blood sampling. All participants were subsequently followed up for major chronic diseases and vital status during 2016-2020, with a loss-to-follow-up rate of 12.04%. We further excluded 1,997 individuals with general and/or abdominal obesity at baseline (BMI ≥ 28 kg/m² or a waist circumference ≥85 cm for women or ≥90 cm for men), 1,245 with missing BMI or WC at follow-up, and 176 without sufficient information on depression at baseline. Finally, the remaining 4,745 participants were eligible for the analysis (Figure 1). This study was approved by the Institutional Review Board of Guizhou Province Centre for Disease Control and Prevention (No. s2017-02), and written informed consent was signed by all subjects. All deaths were confirmed through the death registration information system and the essential public health service system.
2.2. Data collection
Baseline information included sociodemographic characteristics (age, gender, ethnicity, education level, residence, marital status, and occupation), lifestyle (tobacco and alcohol consumption, physical activity), and chronic medical history (hypertension, dyslipidemia, diabetes mellitus, and cardiovascular diseases), which were collected by trained investigators through structured questionnaires in face-to-face interviews.
Physical examination data, including height, weight, waist circumference, and blood pressure, were collected by trained investigators through standard procedures. Standing height was measured to the nearest 0.1 cm without shoes using a portable stadiometer. Weight was measured to the nearest 0.1 kg using a digital weighing scale. WC was measured to the nearest 0.1 cm at the midpoint between the lowest rib margin and the iliac crest. Blood pressure data were taken as the average value of three consecutive measurements. Venous blood samples were obtained in the early morning for fasting blood glucose, total cholesterol, highdensity lipoprotein cholesterol, low-density lipoprotein cholesterol, and triglyceride levels after the participants had fasted for at least 8 h.
The data collection methods used at follow-up were the same as those used at baseline.
2.3. Assessments of depression and obesity
The Patient Health Questionnaire-9 (PHQ-9) was used to measure the level of depression among participants, in line with the Diagnostic and Statistical Manual of Mental Disorders criteria (DSM-IV) (19). Subjects answered nine questions, each graded from 0 to 3, giving a total score ranging from 0 to 27; a total score of 0 was classified as no depression, 1-4 points as minimal depression, and ≥5 points as mild or more advanced depression (20).
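The scoring rule above can be expressed as a short sketch (Python used for illustration; the function names are ours, not part of the study protocol):

```python
def phq9_total(item_scores):
    """Total PHQ-9 score: nine items, each graded 0-3 (range 0-27)."""
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 requires nine items scored 0-3")
    return sum(item_scores)

def depression_level(total_score):
    """Depression categories used in this study for the PHQ-9 total score."""
    if total_score == 0:
        return "no depression"
    if total_score <= 4:
        return "minimal depression"
    return "mild or more advanced depression"

print(depression_level(phq9_total([0] * 9)))                      # no depression
print(depression_level(phq9_total([1, 0, 1, 0, 0, 0, 0, 0, 0])))  # minimal depression
print(depression_level(phq9_total([2, 1, 1, 1, 0, 0, 0, 0, 0])))  # mild or more advanced depression
```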
Body mass index (BMI) was calculated as weight in kilograms divided by the square of height in meters, and general obesity was defined as BMI ≥ 28 kg/m². Abdominal obesity was defined as a waist circumference ≥85 cm for women or ≥90 cm for men (21). Obesity was defined as meeting either criterion. Overweight was defined based on BMI (24.0-27.9 kg/m²) or WC (80-85 cm for women and 85-90 cm for men).
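As a minimal sketch, the BMI and WC criteria translate directly into code (illustrative only; the function names are ours, the thresholds follow the definitions above):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def classify_obesity(weight_kg, height_m, wc_cm, sex):
    """Apply the study's definitions of general, abdominal, and any obesity."""
    general = bmi(weight_kg, height_m) >= 28.0
    wc_cutoff = 85.0 if sex == "female" else 90.0  # sex-specific WC threshold
    abdominal = wc_cm >= wc_cutoff
    return {"general": general, "abdominal": abdominal, "any": general or abdominal}

# A man of 90 kg and 1.70 m has BMI ~31.1 (general obesity) but WC 80 cm (< 90 cm)
print(classify_obesity(90, 1.70, 80, "male"))
# {'general': True, 'abdominal': False, 'any': True}
```

Note that a participant can qualify through either criterion alone, which is why abdominal obesity dominates the incidence figures reported later.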
2.4. Statistical analysis
Continuous variables were expressed as means and standard deviations, and categorical variables as frequencies with proportions. The χ² test and Kruskal-Wallis test were used to compare group differences. Person-years of follow-up were calculated from the baseline survey to the date of confirmed death, onset of obesity, or completion of follow-up, whichever came first. We fitted four Cox proportional hazards regression models to estimate the hazard ratio (HR), adjusted HR (aHR), and corresponding 95% confidence interval (CI) for the association between depression and the risk of obesity. Model 1: without any adjustment for covariates. Model 2: adjusted for age (<30, 30-59.9, ≥60 years) and gender (male or female). Model 3: model 2 plus education (10 years and above), occupation (farmer or other), physical activity (yes or no), marriage (unmarried, married, divorced), family relations (good, general, poor), alcohol use (yes or no), and dietary habit (yes or no). Model 4: model 3 plus baseline diabetes (yes or no), baseline hypertension (yes or no), and baseline dyslipidemia (yes or no). We tested for interactions between all target and adjustment variables and performed stratified analyses where significant interactions were observed. For sensitivity analyses, model 4 was repeated after individuals with overweight at baseline were excluded. Schoenfeld residuals were used to test the proportional hazards assumption in the Cox regression models, and no violation was found. Two-sided P < 0.05 was considered statistically significant. All statistical procedures were performed in R software (Version 4.0.3; R Foundation for Statistical Computing, Vienna, Austria).
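The person-year calculation described above (censoring at the earliest of death, obesity onset, or end of follow-up) can be sketched as follows. The study itself used R; the Python function and the dates below are invented for illustration:

```python
from datetime import date

def person_years(baseline, death=None, obesity_onset=None, follow_up_end=None):
    """Follow-up time from baseline to the earliest recorded endpoint,
    in years (365.25 days per year)."""
    endpoints = [d for d in (death, obesity_onset, follow_up_end) if d is not None]
    if not endpoints:
        raise ValueError("at least one endpoint date is required")
    earliest = min(endpoints)  # whichever event came first
    return (earliest - baseline).days / 365.25

# This participant developed obesity before the end of follow-up, so the
# person-time stops at obesity onset rather than at the final survey date.
py = person_years(date(2011, 1, 1),
                  obesity_onset=date(2018, 1, 1),
                  follow_up_end=date(2020, 6, 30))
print(round(py, 2))  # 7.0
```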
3. Results

3.1. Baseline characteristics
Among the 4,745 adults in this study, 3,510 were not considered depressed, 922 had minimal depression, and 313 had mild or more advanced depression. Details of the baseline characteristics are shown in Table 1. The average age of all participants was 44.09 ± 15.07 years, and nearly half (48.8%) were male. More than half were Han Chinese (59.8%), farmers (54.4%), or had received education for ≥9 years (45.1%). Most (80.8%) were married. Differences between the non-depressed participants and those with different grades of depression were statistically significant (p < 0.05) in terms of age, gender, ethnicity, education time, occupation, and family relationship (seen in Table 1).
3.2. Associations between baseline depression and incident obesity
In total, 4,745 subjects were followed up for 34,138.66 person-years, with an average follow-up of 7.19 ± 1.15 years and a maximum of 9.54 years. A total of 1,115 cases of incident obesity were identified, corresponding to an incidence of 32.66 per 1,000 PYs; there were 1,063 cases (31.14 per 1,000 PYs) of abdominal obesity and 321 cases (9.40 per 1,000 PYs) of general obesity. The incidence of obesity was highest in subjects with mild or more advanced depression (38.01 per 1,000 PYs). As shown in Table 2, both model 1 (univariate Cox model) and model 2 (adjusted for age and gender) showed that depression was associated with an increased risk of incident obesity. In the fully adjusted models, the aHR for abdominal obesity was 1.07 per SD increase in PHQ score. Compared with those not depressed (PHQ score = 0), participants with minimal (aHR: 1.22, 95% CI: 1.05, 1.43) and mild or more advanced depression (aHR: 1.27, 95% CI: 1.01, 1.62) remained at higher risk of incident abdominal obesity. Risks of any incident obesity among subjects with minimal (aHR: 1.21, 95% CI: 1.04, 1.40) and mild or more advanced depression (aHR: 1.30, 95% CI: 1.03, 1.64) were also significantly higher than among non-depressed participants (seen in Table 2).
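The incidence rates reported above follow directly from the event counts and the total person-time; a quick sketch reproduces them (Python used for illustration, with the counts taken from the text):

```python
def incidence_per_1000_py(events, person_years):
    """Incidence rate expressed per 1,000 person-years."""
    return events / person_years * 1000

total_py = 34138.66  # total follow-up time reported in the study
for label, events in [("any obesity", 1115),
                      ("abdominal obesity", 1063),
                      ("general obesity", 321)]:
    print(label, round(incidence_per_1000_py(events, total_py), 2))
# any obesity 32.66
# abdominal obesity 31.14
# general obesity 9.4
```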
3.3. Stratification analysis
The potential modification effects of age, gender, ethnicity, and occupation on the association of depression with incident obesity were explored in this study (seen in Figure 2). Associations between depression and incident obesity significantly varied over ethnicity and occupation (P for interaction = 0.001 and <0.001, respectively), and risks for incident obesity were statistically higher in Han Chinese or farmers. However, age and gender interactions were not observed.
In the sensitivity analysis, the corresponding effect estimates of baseline depression status on the incident obesity did not change substantially after excluding participants with overweight at baseline (seen in Figure 3).
4. Discussion
Based on a prospective cohort study in Southwest China, we found that the incidence of obesity was high and that depression was strongly associated with the risk of incident obesity in this community adult population, especially among Han Chinese and farmers. Our findings indicate that improving depression may help to prevent and control developing obesity.
The incidence rate of abdominal obesity (31.1/1,000 PYs) was higher than that of general obesity (9.4/1,000 PYs) in this study. The incidence of general obesity was also higher than that reported in an earlier Chinese cohort study (6.97‰) (25); the difference may be driven by economic developments, sociocultural norms, and policies accompanying China's rapid urbanization and industrialization (26). Previous studies and WHO data have shown that the prevalence of obesity in many countries doubled and even quadrupled over the last 30 years (1,27). The critical increase of obesity in China and worldwide calls for more vigorous interventions for obesity prevention and treatment.
Previous studies have demonstrated a positive association between depression and obesity (10,28,29). In addition, BMI has a well-documented limitation: it cannot discriminate between fat percentage and lean mass (30). Weight management guidelines in several countries therefore suggest that health professionals consider both BMI and WC to diagnose obesity (31). WC has been confirmed to capture depression-induced obesity more accurately than BMI, owing to the accumulation of visceral adipose tissue (29). In this study, the aHR of incident abdominal obesity was 1.07 per SD (2.04-point) increase in PHQ score, which was comparable with findings from two cohort studies in Norway and the USA (32,33). Several potential mechanisms for the association between depression and obesity have been studied. Apart from the effect on individual perception of weight (34), impaired fat metabolism caused by abnormal secretion along the hypothalamic-pituitary-adrenal axis (32) and the intake of antidepressants such as tricyclics and selective serotonin reuptake inhibitors may explain the mechanism of obesity (35,36). Hyperphagia and hypersomnia have been shown to be critical features of atypical depression (37), leading to weight gain through increased energy intake and circadian rhythm dysregulation (38). A twin study in Washington demonstrated that shared genetic risk might act upon both depression and obesity (39), which should also be considered. However, this study did not observe a significant association between depression and incident general obesity.
Several studies have indicated that the association between being obese and depressed mood tends to vary across ethnic groups (40), suggesting that sociocultural differences or moderation effects of ethnicity should be considered (36). The increased risk of incident obesity based on BMI and WC was more significant among Han Chinese and farmers in this study; such ethnic variation may be explained by genetic background and other factors (40,41). Miao and Bouyei were the main groups among the 1,906 minority participants, who accounted for 40.2% of this study, and their genetic background has been reported to differ significantly from that of Han Chinese (42). One study conducted in Guizhou province also pointed out that Bouyei people had a lower prevalence of general (4.8 vs. 10.9%) and abdominal obesity (13.6 vs. 26.8%) compared with Han Chinese (43), which was similar to this study. The more significant association among farmers may be related to low awareness of obesity and depression and inadequate access to depression-related healthcare services, while other occupational groups could benefit from timely management of depression-related symptoms (44). A French study showed that the prevalence of depression combined with obesity was higher in rural areas (45). Effect modification by gender has been reported for the association between depression and obesity, but was not observed in this study. A cohort study in Houston showed that depressed males had a 6-fold increased risk of obesity, while a meta-analysis demonstrated that the association was more pronounced in adolescent females (28, 46). Accumulating evidence points to a stronger correlation between depression and obesity among females (11). Divergences in psychological characteristics, interpersonal barriers, and physical predispositions might explain the association among females (39,47,48). Physiological studies indicate that the difference is a consequence of stronger immune responses and more inflammatory markers caused by increased estrogen (49). Intense mood swings and emotional eating also act as mediators between depression and future weight gain, and are more common among females (11,48). However, no gender interaction was observed in this study.

FIGURE: Interactions between depression and sociodemographic factors on incident obesity among Chinese adults. Adjusted for age, gender, education, occupation, physical activity, marriage, family relations, alcohol use, dietary habit, hypertension, diabetes mellitus, and dyslipidemia. PHQ-9, Patient Health Questionnaire-9; aHR, adjusted hazard ratio; 95% CI, 95% confidence interval.
To our knowledge, this was the first study to investigate the association between depression and incident obesity among the community population in southwest China. Strengths of this study were the prospective cohort design with a 10-year follow-up period and the relatively low loss to follow-up. Also, we explored the effects of depression on the risk of incident obesity using anthropometric indices obtained through standardized measurements rather than self-report. Of course, this study had some notable limitations. First, baseline depression was assessed by the PHQ-9 scale without clinical diagnoses. Second, data on depression status and antidepressant use were not collected during the follow-up survey, both of which may bias the findings of this study. Third, some possible confounding factors such as family history, genetic variants of obesity, and energy intake were not collected and controlled in this study, and should be considered in future studies. Our findings in this southwest Chinese population need to be confirmed by further prospective or intervention studies in different populations. Further studies on clinically diagnosed depression and repeated measures of depression are required to confirm the complex bidirectional association between depression and obesity among the Chinese population.
Conclusions
In conclusion, this long-term prospective study demonstrated a high risk of incident obesity among the community population in southwest China, and that both minimal and mild or more advanced depression increased the risk of developing obesity, especially among Han Chinese and farmers. Our findings further suggest that improving healthcare for depression may help prevent and control developing obesity, especially abdominal obesity, in community settings. Government departments and medical institutions, especially community health service centers, should pay more attention to farmers' mental health issues and develop appropriate community-based depression intervention services to improve obesity control.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by the Institutional Review Board (or Ethics Committee) of Guizhou Province Centre for Disease Control and Prevention (No. S2017-02). The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Author contributions
TL and BW: conceptualization, methodology, formal analysis, validation, writing-original draft, and visualization. YC: methodology, data curation, writing-review, and editing. YY and JZ: conceptualization, methodology, supervision, funding acquisition, writing-review, and editing. NW and KX: conceptualization, methodology, data curation, writing-review, and editing. CF: conceptualization, methodology, supervision, resources, writing-review, and editing. All authors contributed to manuscript revision, read, and approved the submitted version.
Product Characterization and Marketing Strategy of “BREM UNGU”: Rejuvenation of Indonesian Traditional Food with Local Purple Sweet Potato as the Source of Natural Colorant
Solid brem is an indigenous fermented food of Indonesia, typically formed as a long thick bar, white to yellow in color, with a sweet-sour taste and cooling sensation, and easily crumbled in the presence of water. These unique characteristics are generated through alcoholic fermentation of glutinous rice, followed by filtration, concentration, whipping, and dehydration. Although it is continuously produced and sold as a regional specialty of Madiun, East Java, many people refuse to consume solid brem due to its high sugar content and the lack of information describing its potential health benefits. The present study explores the possibility of combining glutinous rice with another local food material having well-known health benefits. Here, we utilize the potency of a local purple sweet potato (Ipomoea batatas var. Gunung Kawi), rich in carbohydrates and anthocyanins, to partly substitute the glutinous rice while adding health benefits to the final product. The anthocyanins present in purple sweet potato have been well studied, exhibiting antioxidant, anti-inflammatory, and hepatoprotective activities. The raw materials were subjected to yeast fermentation for 7 days and subsequently extracted using a manual mechanical press. A series of material ratios (extract of fermented glutinous rice : purple sweet potato = 30:1, 15:1, 15:2) was prepared prior to dehydration of the brem, and the color, sugar content, pH, antioxidant activity, and sensory properties of the resulting product were analyzed. Moreover, a competitive analysis and marketing strategy are also discussed in order to ensure the sustainability of this new innovation. © 2021 MRCPP Publishing. All rights reserved. http://doi.org/10.33479/ijnp.2021.03.1.10
INTRODUCTION
Indonesia is a country extremely rich in cultural heritage, including diverse languages, artworks, dances, ceremonies, architecture, and traditional food recipes. Almost every tribe and region in Indonesia has its own specialties. One of the most famous traditional foods in East Java is solid brem, originating from Madiun Regency. Solid brem is prepared from the extract of fermented glutinous rice, which is concentrated, whipped, and solidified by drying at ambient temperature to generate a sweet final product with a very distinctive taste and flavor.
Almost every family in Madiun has developed its own ancient recipe and continues to produce solid brem to this day.
Although solid brem is well recognized as a typical edible souvenir of Madiun, possible developments or modifications of its raw materials have remained unexplored until today. Therefore, the local communities fully rely on the use of white sticky rice, with slight modifications of the taste and appearance of brem by adding synthetic colorants and flavors. There are only a few reports on material modification of solid brem, i.e. the use of cassava and the addition of orange flavor, carried out by Widjanarko et al. [1][2][3][4], but without consideration of additional health benefits of the main product. In other words, glutinous rice is still the only main raw material of solid brem today.
"BREM UNGU" highlights the presence of natural colors, additional health benefits, and rearranged marketing strategy.
"BREM UNGU" was developed as a reborn product of the authentic solid brem of Madiun. An increased proportion of fermented purple sweet potato relative to fermented glutinous rice altered the texture of the solid brem, but improved the color and antioxidant activity of the final product and increased the overall preference of the panelists. The market potential was approached by re-setting the serving size and packaging design in order to create a competitive product that has uniqueness and brings benefits to local communities.
Glutinous rice (Oryza sativa var. glutinosa) is a kind of rice cultivated mainly in Southeast and East Asia, Northeastern India, and Bhutan, characterized by opaque grains, low amylose content (1-2%), and stickiness when cooked. Milled glutinous rice can be processed into various food products, such as sweets, rice cakes, puffed rice, and rice crackers, well adopted in the local recipes of communities in most Asian countries [5]. It belongs to the main staple foods of most Asian people, so its demand and price are fairly competitive. Glutinous rice is one of Indonesia's leading export commodities, especially to Singapore [6].
On the other hand, purple sweet potato, which belongs to the dicotyledonous plants, is widely grown and consumed worldwide [7]. Sweet potatoes are a source of complex carbohydrates, not only simple starches but also oligosaccharides and dietary fibers, hence they are often consumed by those who want to control their body weight. In addition, purple sweet potato contains significant amounts of anthocyanins (6.5-29.1 mg/100 g fresh weight), which can function as natural colorants and antioxidants for the development of functional food [8,9]. Consumption of natural antioxidants can help the human body minimize oxidative damage and prevent premature aging. Interestingly, East Java is also bestowed with a certain cultivar of purple sweet potato (Ipomoea batatas var. Gunung Kawi) which is locally grown and can be obtained at a cheaper price by local communities [10]. The health benefits of anthocyanins from this cultivar have also been well investigated and documented [11][12][13].
The present study aimed to: (i) investigate the best ratio (extract of fermented glutinous rice : purple sweet potato) to prepare solid brem with a preferable color and additional health benefits, and (ii) determine a marketing strategy for sustainable production. The name given to this newly developed product is "BREM UNGU", with the hope of bringing an Indonesian local heritage to a global image.
EXPERIMENTAL

Preparation of Tapai from Glutinous Rice and Purple Sweet Potato
The glutinous rice and tapai starter (Na Kok Liong, NKL) were purchased from a local market in Malang, East Java, Indonesia. Two kilograms of rice were weighed and rinsed twice under running tap water. The rice was then steamed for 35 min; intermittently, a cup of hot water was poured in and the rice was stirred. The steamed rice was cooled by aerating it on a layer of banana leaf on a wide tray until it reached ambient temperature (25-30 °C). Four grams of tapai starter were ground and sprinkled evenly over the steamed sticky rice, then the mixture was packed closely in banana leaf and left to ferment at room temperature for 7 days.
The purple sweet potato var. Gunung Kawi was obtained from a local farm in Gunung Kawi, Malang, East Java, Indonesia. Three kilograms of sweet potatoes were peeled, rinsed under running water, and subsequently cut into small pieces (approx. 1 cm × 1 cm × 1 cm). The sweet potatoes were steamed for 35 min and then cooled to room temperature. An amount of ground tapai starter (5 grams) was spread over banana leaves, and the steamed sweet potatoes were rolled around to make sure the yeast powder covered the entire surface. The comparable anaerobic fermentation occurred for 6 days in a closed container.
The liquor of tapai was generated by collecting the fermented raw material in a cloth and manually pressing it by hand. The liquors of fermented sticky rice and purple sweet potato were stored separately and used directly in the next process. In this step, 2,200 mL and 500 mL of tapai extract were obtained from glutinous rice and purple sweet potato, respectively.
Preparation of Solid Brem
The liquor of tapai from glutinous rice was boiled and concentrated at 95±5 °C until it turned clear and viscous. The concentrated liquor was then cooled and whipped using a hand mixer (high speed) for about 30 minutes until the color turned white. This material was blended with the liquor from fermented purple sweet potato at various ratios (extract of fermented glutinous rice : purple sweet potato = 30:1 (A), 15:1 (B), and 15:2 (C)), and subsequently dried as a thin layer in a drying oven at 50 °C for 3 days.
Determination of Physical, Chemical, and Sensory Characteristics
The analyses of physical properties comprised water content and color value, whereas those of chemical properties involved measurements of pH, sugar content, and antioxidant activity. The moisture content of the brem was determined by means of a Moisture Analyzer MOC63u (Shimadzu, Japan). The color value according to the CIELAB system (L* - lightness, a* - redness, b* - yellowness) was measured with "Color Grab" (mobile application) before and after drying, under the same light conditions and background. Moreover, a small amount of the resulting product (0.5 gram) was dissolved in 5 mL aquadest and the solution was analyzed for pH value as well as sugar content (refractometer Brix ATC 19003, calibrated). The antioxidant activity of the product was determined using the DPPH method, in which 0.25 gram of sample was extracted in 5 mL methanol, vortexed for 10 seconds, and centrifuged. An aliquot of the supernatant (4 mL) was reacted with 1 mL of 0.2 mM DPPH for 30 min and its absorbance was measured at 517 nm by means of a spectrophotometer (UV-1700, Shimadzu, Japan). The antioxidant activity (%) was calculated according to the formula: (A0 − As) × 100 / A0, where A0 = absorbance of the blank and As = absorbance of the sample after reaction with DPPH.
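The DPPH calculation described above can be expressed directly in code. The absorbance values below are hypothetical, chosen only to give an activity in the range reported for "BREM UNGU".

```python
def dpph_activity(a_blank, a_sample):
    """Antioxidant activity (%) from the DPPH assay at 517 nm:
    (A0 - As) * 100 / A0, where A0 is the blank and As the sample."""
    return (a_blank - a_sample) * 100.0 / a_blank

# Hypothetical absorbances giving ~67.7 % activity (close to treatment B)
activity = dpph_activity(a_blank=1.000, a_sample=0.323)
```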
In addition, sensory evaluation was carried out to obtain hedonic scores for color, aroma, texture, and flavor, as well as the overall preference for each treatment ratio. Thirty untrained panelists (20-40 years old) were selected through an online questionnaire, being in good health, non-smoking, and willing to participate. The product from each ratio was coded with a three-digit random number and the panelists were asked to select a score on a 7-level hedonic scale (1-dislike extremely, 2-dislike very much, 3-dislike, 4-neither like nor dislike, 5-like, 6-like very much, 7-like extremely).
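A minimal sketch of how the 7-level hedonic scores might be summarized per attribute. The example panel scores are invented, and mapping the mean to the nearest scale label is a convention assumed here, not taken from the paper.

```python
HEDONIC = {1: "dislike extremely", 2: "dislike very much", 3: "dislike",
           4: "neither like nor dislike", 5: "like", 6: "like very much",
           7: "like extremely"}

def summarize_hedonic(scores):
    """Return the mean score and the nearest hedonic label."""
    mean = sum(scores) / len(scores)
    return mean, HEDONIC[round(mean)]

mean, label = summarize_hedonic([6, 6, 7, 5, 6])  # invented panel scores
```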
Development of Traditional Solid Brem and Characteristics of "BREM UNGU"
The processing of solid brem in Madiun, East Java, Indonesia, is a true manifestation of the intelligence of our ancestors. The preparation of solid brem embodies the basic principles of food preservation, yeast fermentation, concentration of a sugar solution, whipping or foam formation, and dehydration. To the best of our knowledge, no similar product has been developed by other cultures in any other region of the world. It is well known that any commodity rich in starch can be a substrate of yeast fermentation. The closest local fermented product in principle is tapai made from cassava. However, there are very limited reports revealing the basic function of waxy rice as the main raw material of brem. A hint might be provided by research related to yeast (ethanolic) fermentation. Amylopectin, the branched fraction of starch which is dominant in waxy rice, was found to be more fermentable than amylose [14,15]. Moreover, several preliminary studies have shown that partial substitution of glutinous rice with cassava requires additional maltodextrin to improve the texture and yield [2,3].
The additional material in the present study is purple sweet potato, which is also low in amylopectin content. Thus, the amount of added liquor from fermented purple sweet potato becomes the main limiting factor. A greater proportion of the extract of fermented purple sweet potato affects the solidification of the brem as well as the use of baking soda. Commercial solid brem often uses baking soda to improve the texture, with drying then conducted at ambient temperature. However, baking soda would turn the color of the anthocyanins green. In the present study, we successfully incorporated the extract of fermented purple sweet potato by eliminating the use of baking soda and replacing conventional drying at ambient temperature with oven drying at 50 °C for 2 days. Figure 1 documents the preparation steps of the "BREM UNGU" product. Table 1 provides the physical and chemical properties of "BREM UNGU". Based on the physical characterization, "BREM UNGU" has low moisture content and water activity, an important aspect of food preservation that will be further discussed in the section on the HACCP plan. The moisture content of "BREM UNGU" in all ratios was lower than 16%, the maximum water content for commercial brem determined by SNI No. 01-2559-1992 [16]. Furthermore, the measured pH value of "BREM UNGU" was 4-5 (upon ten-fold dilution), which meets the SNI requirement. High sugar content is one of the typical characteristics of solid brem. The sugar content of "BREM UNGU" was 57.22-70.25%. Commercial brem has ~70% sugar content as a consequence of yeast fermentation and the concentration of the tapai liquor [3]. Yeast fermentation breaks polysaccharides down into simple sugars and hence increases the sweetness of the product. The high sugar content of brem influences its taste and texture, which also crumbles easily when consumed [17].
Besides sugars, other products of yeast fermentation are ethanol and organic acids. The ethanol content can be significantly reduced during concentration of the tapai liquor, whereas the organic acids remain and give brem its distinctive flavor. Interestingly, "BREM UNGU" has a distinctive color originating from the anthocyanins of purple sweet potato. The common color of commercial brem is yellow to brownish yellow as a consequence of browning reactions of the sugars. Table 2 shows the color values of "BREM UNGU" in each treatment before and after drying, with representative appearances illustrating the associated color values. Dehydration of the brem reduced lightness but increased redness, yellowness, and chroma values. The color measurements also revealed the effectiveness of oven drying as a replacement for conventional drying, without causing unwanted changes in the anthocyanin color.
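The chroma increase mentioned above follows directly from the CIELAB readings: chroma is C*ab = sqrt(a*^2 + b*^2), and an overall before/after color change can be quantified with the CIE76 difference ΔE*ab. The numeric L*a*b* values below are illustrative, not the measured values from Table 2.

```python
import math

def chroma(a_star, b_star):
    """CIELAB chroma: C*ab = sqrt(a*^2 + b*^2)."""
    return math.hypot(a_star, b_star)

def delta_e76(lab_before, lab_after):
    """CIE76 colour difference between two (L*, a*, b*) triples."""
    return math.dist(lab_before, lab_after)

# Illustrative readings: drying lowers L* and raises a*, b* (hence chroma)
before, after = (62.0, 12.0, 5.0), (48.0, 20.0, 9.0)
```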
Besides the added natural color, the presence of anthocyanins was expected to enhance the antioxidant activity of brem. The antioxidant activities of "BREM UNGU" were 67.65%, 67.70%, and 67.84% for treatments A, B, and C, respectively. Although the differences were quite small, the antioxidant activity increased with the proportion of liquor from fermented purple sweet potato. Besides the anthocyanins, the flavonoids and polyphenols present in purple sweet potato may also contribute to the antioxidant activity [18]. Furthermore, the results of the organoleptic evaluation are depicted in Figure 2 as a prediction of consumer acceptance. Most panelists (97.4%) declared that they had tried commercial brem, and 66.7% stated they had never encountered brem with a modified color. Generally, all treatments of "BREM UNGU" are acceptable to consumers, since the hedonic scores correspond to "like" to "like very much". Treatment B, with a ratio of glutinous rice extract to purple sweet potato extract of 15:1, showed superior acceptance based on its color, flavor, and overall preference, and can thus be selected for the further production plan of "BREM UNGU".
Marketing Strategy
A competitor analysis was first carried out to determine the position of "BREM UNGU" among sweets in the market. Figure 3 depicts the competitive framework for sweets in current markets, covering both traditional and modern products. Many sweets are now developed and sold, with or without a health claim, made from either fermented or non-fermented material. The most crowded market is for sweets with no health claim made from non-fermented material, such as jelly candies, hard candies, and marshmallows. Some typical local sweets can be found in the market with a health claim and made from non-fermented material, in which the manufacturers incorporate well-known natural compounds bearing biological activities, i.e. extracts of tamarind, eucalyptus, ginger, and curcuma. On the other hand, several traditional sweets are produced from fermented material but carry no health claim, such as "Madumongso" from fermented black waxy rice, and "Suwar-Suwir" and "Permen Tape" from fermented cassava. Interestingly, until today, no sweets have been found with a health claim and made of fermented material. This empty space is specifically targeted in the development of "BREM UNGU". According to the consumer survey, 71.8% of panelists stated that they have consumed brem more than once, and 61.5% declared that they liked these typical Indonesian traditional sweets. Therefore, a local market opportunity is available for further development of brem. In addition, more than 50% of respondents answered that they did not know the health benefits of anthocyanins. This finding should be addressed by adding information on the bioactivities of the anthocyanins of purple sweet potato to the packaging box of "BREM UNGU". The packaging design for this product is given in Figure 4. The reasonable price for each packed product is IDR 15,000, which contains 8 individual sweets. This price includes a 30% markup for each box.
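The pricing logic, cost plus a 30% markup, can be checked in a few lines. The per-box production cost below is back-calculated from the stated IDR 15,000 retail price and is only illustrative.

```python
def retail_price(unit_cost, markup=0.30):
    """Selling price as cost plus a fractional markup on cost."""
    return unit_cost * (1.0 + markup)

cost_per_box = 15_000 / 1.30          # implied cost if price includes 30% markup
price = retail_price(cost_per_box)    # back to IDR 15,000 per box of 8 sweets
```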
The packaging strategy for "BREM UNGU" is mainly aimed at repositioning conventional brem as a modern sweet that could have better market opportunities both locally and internationally. The grand marketing strategy for "BREM UNGU" is given in Figure 5. We hope to explore indigenous healthy food to promote the uniqueness and richness of Indonesia. The materials of "BREM UNGU" originate entirely from local produce and are processed according to a traditional recipe. The additional health claim of brem is expected to raise the market value of both brem and the local purple sweet potato, and will ultimately increase the utilization of local resources and the wealth of local farmers.
CONCLUSION
The best ratio of the extract of fermented glutinous rice to that of purple sweet potato for preparing "BREM UNGU" is 15:1 (treatment B), with average organoleptic scores of 6.1 for color (liked very much), 6 for taste (liked very much), 5.1 for aroma (liked), and an overall preference of 6 (liked very much). This product exhibits an antioxidant activity of 67.70% DPPH scavenging activity. In order to ensure sustainable production of this newly developed product, a market analysis was carried out based on competitor analysis and a triangle strategy, identifying a specialty niche for sweets with a health claim made from fermented material. The continuous development of "BREM UNGU" should be maintained as an effort to preserve cultural heritage, increase the exploration of local natural resources of Mount Kawi, and bring a traditional heritage to a global image.
Linear optics quantum Toffoli and Fredkin gates
We design linear optics multiqubit quantum logic gates. We assume the traditional encoding of a qubit onto state of a single photon in two modes (e.g. spatial or polarization). We suggest schemes allowing direct probabilistic realization of the fundamental Toffoli and Fredkin gates without resorting to a sequence of single- and two-qubit gates. This yields more compact schemes and potentially reduces the number of ancilla photons. The proposed setups involve passive linear optics, sources of auxiliary single photons or maximally entangled pairs of photons, and single-photon detectors. In particular, we propose an interferometric implementation of the Toffoli gate in the coincidence basis, which does not require any ancilla photons and is experimentally feasible with current technology.
I. INTRODUCTION
Quantum information theory [1] exploits the laws of quantum mechanics to devise novel methods of information processing and transmission that would be impossible or very hard to achieve classically. During recent years various protocols for quantum information processing were successfully demonstrated experimentally with several different physical systems. Particular attention has been paid to optical realizations where the quantum bits are encoded onto states of single photons. Photons are ideal carriers of quantum information because they can be distributed over long distances in low-loss optical fibers or in free space. While perfect for quantum communication purposes, photons seemed to be less suitable for quantum computing because the lack of sufficiently strong optical nonlinearities seemed to prevent the implementation of quantum gates between photons.
The situation changed radically in 2001 when Knill, Laflamme, and Milburn (KLM) published their landmark paper in which they showed that scalable universal quantum computation is possible with only single-photon sources, passive linear optical interferometers, and single-photon detectors [2]. The key insight of KLM is that a nonlinearity (such as a Kerr effect) can be simulated on the single-photon level using the above listed resources, conditioning on particular measurement outcomes of the detectors and applying appropriate feedback. The resulting linear optics quantum gates [3] are generally only probabilistic, but the probability of success could in principle be made arbitrarily close to unity by exploiting offline generated multi-photon entangled states and quantum teleportation [4,5,6].
The KLM paper stimulated a number of further works suggesting alternative and improved constructions of the basic quantum C-NOT gate [7,8,9,10,11,12] whose experimental demonstrations by several groups followed [13,14,15,16,17,18,19,20,21]. However, despite these promising successes, extending this approach to more complex schemes involving higher number of photons currently appears to be a formidable experimental task because the overhead in resources (in particular the number of ancilla photons) required by the original KLM scheme is very high.
It is possible to combine the ideas of one-way quantum computation [22,23] and linear optics quantum computing to significantly reduce the resources required for the computation. The techniques introduced by KLM could be used to generate a multiphoton cluster state which then serves as a resource for quantum computing which proceeds by performing certain carefully chosen measurements on each photon from the cluster [24,25,26,27]. First proof-of-principle experimental demonstration of one-way quantum computation with four-photon cluster state has been reported recently [28].
In this paper, we wish to address a different aspect of quantum computing with linear optics. Namely, we will be interested in implementations of the fundamental Toffoli and Fredkin gates, which play an important role both in classical (reversible) computing and in quantum computing and information processing [1]. In a universal quantum computer the multi-qubit gates are usually assumed to be implemented as a sequence of single- and two-qubit gates. However, this strategy may not be optimal in the context of linear optics quantum computing, where schemes tailored specifically for multiqubit gates may require fewer ancilla photons or achieve a higher probability of success than implementations relying on a sequence of single- and two-qubit gates.
The rest of the paper is organized as follows. In Section II we will present a scheme for the N-qubit generalized Toffoli gate, which flips the state of the N-th qubit in the computational basis if all the N − 1 control qubits are in state $|\tilde{1}\rangle$, where the tilde indicates logical qubit states throughout the paper, to distinguish them from the Fock states $|n\rangle$. We will first design a gate operating in the so-called coincidence basis [29], which has the advantage that the scheme does not require any ancilla photons. To make this gate non-destructive it is necessary to perform a quantum non-demolition measurement of the number of photons at the output of the gate, which could be done with linear optics, ancilla photons, and photon-number resolving detectors [30]. Section III is devoted to the three-qubit Fredkin gate, which is a controlled-SWAP gate: the states of the two target qubits are swapped if the control qubit is in state $|\tilde{1}\rangle$. Our scheme requires only six ancilla photons. Finally, Section IV contains brief conclusions and a summary of the main results.
II. QUANTUM TOFFOLI GATE WITH LINEAR OPTICS
In this section we will present and analyze the scheme which realizes the quantum N-qubit Toffoli gate in the coincidence basis. More precisely, the scheme conditionally applies the N-qubit controlled-phase gate to the input qubits, whereby the phase changes by π if all qubits are in the state $|\tilde{1}\rangle$ and does not change otherwise,
$$|\tilde{j}_1, \tilde{j}_2, \dots, \tilde{j}_N\rangle \rightarrow (-1)^{j_1 j_2 \cdots j_N}\, |\tilde{j}_1, \tilde{j}_2, \dots, \tilde{j}_N\rangle,$$
where $j_k \in \{0,1\}$ and k = 1, . . . , N. Note that the N-qubit controlled-phase (C-phase) gate and the Toffoli gate are equivalent up to single-qubit Hadamard transformations H on the N-th qubit (the target).

A. Quantum optical C-phase gate

The proposed optical setup is schematically sketched in Fig. 1. The qubits are encoded into states of single photons. We assume the dual-rail encoding where the two logical levels $|\tilde{0}\rangle_j$ and $|\tilde{1}\rangle_j$ correspond to two paths $j_L$ and $j_R$ taken by a photon, $|\tilde{0}\rangle_j = |01\rangle_{j_L j_R}$ and $|\tilde{1}\rangle_j = |10\rangle_{j_L j_R}$. Note that other encodings such as polarization or time-bin are also possible and can be mutually converted into each other by means of polarizing beam splitters and unbalanced interferometers.
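Logically, the N-qubit C-phase gate described above is just a diagonal operation that flips the sign of the all-ones amplitude. A minimal state-vector sketch of the ideal gate, ignoring the optical implementation and its success probability:

```python
def c_phase(amplitudes):
    """Apply the ideal N-qubit controlled-phase gate to a state vector of
    length 2**N: only the |11...1> amplitude (last index) changes sign."""
    out = list(amplitudes)
    out[-1] = -out[-1]
    return out

# Example for N = 3: uniform superposition over all 8 basis states
state = [1 / 8**0.5] * 8
flipped = c_phase(state)  # sign flip on the |111> component only
```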
The operation of the C-phase gate requires that the phase of the N-photon state changes by π if and only if all photons are in the L modes. In the proposed scheme this is achieved by N-photon interference on an array of unbalanced beam splitters, see Fig. 1. First, each photon propagating in an L mode is split into two modes on a beam splitter with intensity transmittance $T_1$. Then pairs of beams originating from modes $j_L$ and $(j+1)_L$ interfere on an array of N beam splitters with transmittance $T_2$. Each mode $j_R$ passes through a beam splitter with transmittance $T_3$ which acts as a filter balancing the amplitudes of the modes $j_L$ and $j_R$ after the application of the gate.
The gate operates in the coincidence basis, i.e. it succeeds if a single photon is detected in each pair of output modes $j_L$ and $j_R$. Assume first that at least one photon is in a mode $k_R$. It is easy to see that in this case the only way the photons in the $j_L$ modes can reach the appropriate output ports of the beam splitters is that each photon is transmitted through both beam splitters, and the total probability amplitude for this to happen reads $a_n = (t_1 t_2)^{N-n} t_3^n$, where $n \geq 1$ is the number of photons in the $k_R$ modes and $t_j$ is the amplitude transmittance, $T_j = t_j^2$. Since the gate should be unitary, $a_n$ must not depend on n, which can be achieved by choosing

$t_3 = t_1 t_2$. (2)

The situation changes when all photons are initially in the L modes, i.e. the input state reads $|\tilde{1}, \tilde{1}, \ldots, \tilde{1}\rangle$. In this case, there are two ways the photons can reach the N output ports. One option is that all photons are transmitted through all beam splitters. The second option is that all photons are reflected from all beam splitters. Provided that these two alternatives are indistinguishable, i.e. there is a good spatiotemporal overlap of the photonic wavepackets on the beam splitters, they interfere, and the resulting amplitude reads

$a_0 = (t_1 t_2)^N - (r_1 r_2)^N$. (3)

Note the minus sign, which arises due to the π phase shift ψ = π in one path of the reflected photons, see Fig. 1. Note also that in the figure the path of the reflected photon in mode $N_L$ looks much longer than all other paths. In the actual implementation the geometry of the setup should be such that all paths are carefully balanced, resulting in a good overlap of the photons and high-visibility interference.
The gate operates as desired if $a_0 = -a_{n>0} = -t_1^N t_2^N$. Expressed in terms of the intensity transmittances this condition translates into

$[(1-T_1)(1-T_2)]^{N/2} = 2\,(T_1 T_2)^{N/2}$, (4)

where we used that $r_j^2 = 1 - T_j$. The formula (4) describes a single-parametric class of N-qubit optical controlled-phase gates working in the coincidence basis.
The probability of success is given by $P_{\mathrm{succ}} = |a_n|^2 = T_1^N T_2^N$ and on expressing $T_2$ in terms of $T_1$ from Eq. (4), we obtain

$P_{\mathrm{succ}} = \left[\frac{T_1(1-T_1)}{1+(2^{2/N}-1)T_1}\right]^N$. (5)

The optimal $T_1$ maximizing $P_{\mathrm{succ}}$ can be easily determined by solving $dP_{\mathrm{succ}}/dT_1 = 0$, which yields

$T_{1,\mathrm{opt}} = \frac{1}{1+2^{1/N}}$. (6)

On inserting this value into Eq. (5) we find that $T_{2,\mathrm{opt}} = T_{1,\mathrm{opt}}$, hence it is optimal to use a scheme where the transmittances $T_1$ and $T_2$ are the same. The optimal probability then reads

$P_{\mathrm{succ,opt}} = (1+2^{1/N})^{-2N}$. (7)

In particular, for N = 3 we find $P_{\mathrm{succ,opt}}^{N=3} \approx 0.75\%$.
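The optimization can be cross-checked numerically. The following sketch eliminates $T_2$ via the constraint of Eq. (4) and compares a brute-force maximization of the success probability with the closed form $(1+2^{1/N})^{-2N}$, which reproduces the quoted value of roughly 0.75% for N = 3:

```python
import numpy as np

def p_succ(T1, N):
    """Success probability of the coincidence-basis C-phase gate:
    P = (T1*T2)**N with T2 eliminated via (1-T1)(1-T2) = 2**(2/N) * T1*T2."""
    T2 = (1 - T1) / (1 + (2 ** (2 / N) - 1) * T1)
    return (T1 * T2) ** N

def p_opt(N):
    """Closed-form optimum P_opt = (1 + 2**(1/N))**(-2N)."""
    return (1 + 2 ** (1 / N)) ** (-2 * N)

# Brute-force check that the closed form agrees with a grid maximization.
T1_grid = np.linspace(1e-4, 1 - 1e-4, 100000)
for N in (2, 3, 4):
    assert abs(p_succ(T1_grid, N).max() - p_opt(N)) < 1e-8

print(f"N=3: T1_opt = {1 / (1 + 2 ** (1 / 3)):.4f}, P_opt = {p_opt(3):.4%}")
```

The printed optimum for N = 3 is about 0.75%, at $T_{1,\mathrm{opt}} \approx 0.44$.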
B. Generalized C-phase gate
The transformation (1) can be extended such that an arbitrary phase shift φ is introduced when all qubits are in the logical state $|\tilde{1}\rangle$. The generalized controlled-phase gate thus acts as follows,

$|j_1, \ldots, j_N\rangle \rightarrow e^{i\varphi\, j_1 j_2 \cdots j_N}\,|j_1, \ldots, j_N\rangle$. (8)

We shall show that also this operation can be conditionally implemented with the scheme shown in Fig. 1, provided that the phase shift ψ in one arm of the multiphoton interferometer and the transmittances $T_1$ and $T_2$ are properly chosen. Repeating the derivation outlined in the preceding subsection we find that the condition that has to be satisfied reads

$(t_1 t_2)^N + e^{i\psi}(r_1 r_2)^N = e^{i\varphi}(t_1 t_2)^N$. (9)

Upon splitting this formula into the real and imaginary parts and solving for ψ we obtain

$\psi = \frac{\varphi + \pi}{2}$. (10)

Similarly we also arrive at a generalization of the formula (4),

$[(1-T_1)(1-T_2)]^{N/2} = 2\sin(\varphi/2)\,(T_1 T_2)^{N/2}$. (11)

The probability of success is maximized by choosing

$T_{1,\mathrm{opt}} = T_{2,\mathrm{opt}} = \frac{1}{1 + [2\sin(\varphi/2)]^{1/N}}$, (12)

FIG. 2: Equivalence between the two-qubit controlled-unitary and controlled-phase gates.
and we have

$P_{\mathrm{succ,opt}} = \left(1 + [2\sin(\varphi/2)]^{1/N}\right)^{-2N}$. (13)

Note that the probability of success depends on the required conditional phase shift φ and for a fixed N it achieves its minimum for φ = π, i.e. when we attempt to implement the N-qubit Toffoli gate.
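A quick numerical check of the φ-dependence confirms the stated behaviour. The expression $P_{\mathrm{succ,opt}}(\varphi) = (1+[2\sin(\varphi/2)]^{1/N})^{-2N}$ used below is a reconstruction from the surrounding derivation and should be treated as an assumption rather than the paper's printed formula:

```python
import numpy as np

def p_opt_phi(phi, N):
    """Optimal success probability of the generalized C-phase gate
    with conditional phase shift phi (reconstructed expression)."""
    return (1 + (2 * np.sin(phi / 2)) ** (1 / N)) ** (-2 * N)

phi = np.linspace(0.01, np.pi, 1000)  # phi in (0, pi]; P is symmetric about pi
for N in (2, 3, 4):
    p = p_opt_phi(phi, N)
    # The success probability decreases monotonically towards phi = pi ...
    assert np.all(np.diff(p) < 0)
    # ... where it coincides with the Toffoli-gate value (1 + 2**(1/N))**(-2N).
    assert abs(p[-1] - (1 + 2 ** (1 / N)) ** (-2 * N)) < 1e-12
```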
With the two-qubit generalized C-phase gate at hand we can implement in the coincidence basis an arbitrary two-qubit controlled-U gate,

$\mathrm{C\text{-}U} = |0\rangle\langle 0|_C \otimes I_T + |1\rangle\langle 1|_C \otimes U_T$, (14)

where a unitary operation U is applied to the target qubit iff the control qubit is in state $|\tilde{1}\rangle_C$. The equivalent scheme involving a C-phase gate is shown in Fig. 2(b). Note first that in the basis of eigenstates $|u_j\rangle_T$ of U, the controlled unitary gate boils down to conditional phase shifts,

$U|u_j\rangle_T = e^{i\phi_j}|u_j\rangle_T, \qquad j = 0, 1$. (15)

The unitary V maps the eigenstates of U onto the computational basis states, $V|u_j\rangle_T = |j\rangle_T$, j = 0, 1. Next a C-phase gate with $\Delta\varphi = \phi_1 - \phi_0$ follows. Finally, the inverse operation $V^\dagger$ is applied to the target while the control qubit is subject to a phase shift operation $|0\rangle_C \rightarrow |0\rangle_C$, $|1\rangle_C \rightarrow e^{i\phi_0}|1\rangle_C$. It is easy to see that the net result of this sequence of gates is the controlled-U operation (14).
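The described decomposition of a controlled-U gate into V, a C-phase gate with $\Delta\varphi = \phi_1 - \phi_0$, a control phase $e^{i\phi_0}$, and $V^\dagger$ can be verified directly with small matrices. In the numpy sketch below the particular eigenphases 0.7 and 2.1 are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build U from a randomly chosen eigenbasis V and eigenphases phi_0, phi_1,
# so that V |u_j> = |j> holds by construction: U = V^dag diag(e^{i phi_j}) V.
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
V, _ = np.linalg.qr(A)                      # a random 2x2 unitary
phi0, phi1 = 0.7, 2.1                       # arbitrary eigenphases
U = V.conj().T @ np.diag(np.exp(1j * np.array([phi0, phi1]))) @ V

I2 = np.eye(2)
cphase = np.diag([1, 1, 1, np.exp(1j * (phi1 - phi0))])          # C-phase(dphi)
ctrl_ph = np.diag([1, 1, np.exp(1j * phi0), np.exp(1j * phi0)])  # phase on |1>_C

# Gate sequence: (I x V), then C-phase, then the control phase, then (I x V^dag).
circuit = np.kron(I2, V.conj().T) @ ctrl_ph @ cphase @ np.kron(I2, V)
controlled_U = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), U]])

assert np.allclose(circuit, controlled_U)
```

For a control in $|1\rangle$ the sequence collapses to $V^\dagger \mathrm{diag}(e^{i\phi_0}, e^{i\phi_1}) V = U$, and for a control in $|0\rangle$ it collapses to $V^\dagger V = I$, which is exactly the controlled-U operation.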
C. Heralded controlled-phase gate
The advantage of working in the coincidence basis is that no extra ancilla photons are required. The scheme is thus very economical in resources; for instance, the demonstration of the three-qubit Toffoli gate would require detection of three-photon coincidences, which is well within the scope of present technology.
However, this approach also suffers from a significant disadvantage, since we do not know whether the gate succeeded until we detect the photons. It is thus not possible to directly employ this gate as a part of a more complex quantum information processing network. Nevertheless, it is possible to remove this drawback by performing quantum non-demolition measurements of the number of photons at the outputs of the gate. If this measurement verifies that a single photon is present in each pair of modes j L and j R then we know that the gate was applied successfully while the non-demolition character of this measurement guarantees that the output photons emerging from the gate are preserved and not destroyed by the verification.
A simple way of performing the non-demolition measurement of the number of photons in two modes is to employ an auxiliary pair of photons in a maximally entangled state and attempt to teleport the single photon in modes $j_L$ and $j_R$ [30]. The single-photon detectors used for the partial Bell measurement which lies at the heart of the teleportation [5] must be able to resolve the number of photons. Detection of exactly two photons in the Bell analysis confirms that a single photon has been successfully teleported. Observation of any other total number of photons indicates a failure of the gate. This method requires N auxiliary maximally entangled photon pairs in total, and the probability of successful non-demolition measurement given that the gate was applied successfully scales as $1/2^N$, because the optimal partial Bell measurement with linear optics can distinguish only two out of four Bell states.
An alternative scheme for a partial probabilistic non-demolition photon-number measurement on a pair of modes has been proposed in [30] (see also the discussion in Ref. [31]). The advantage of this latter scheme is that it does not rely on maximally entangled photon pairs and instead requires single photons in a product state, which may be easier to generate. The measurement requires two ancilla photons and two photodetectors which can distinguish the number of photons in a mode. A coincidence detection of a single photon by each detector indicates that at least a single photon has been present in the input pair of modes, and if exactly a single photon was at the input then its state was not disturbed by the measurement. If this partial measurement is carried out on each pair of modes $j_L$, $j_R$, and if all N measurements indicate that there was at least a single photon in each pair of modes, then, since there were altogether N photons at the input of the gate, we can conclude that the C-phase gate was applied successfully.
III. FREDKIN GATE
Our scheme for a linear optics Fredkin gate is inspired by the quantum optical Fredkin gate originally proposed by Milburn [32]. The Fredkin gate is a controlled SWAP operating on the Hilbert space of three qubits: the states of qubits A and B are exchanged if the control qubit C is in state $|\tilde{1}\rangle$ and nothing happens if it is in state $|\tilde{0}\rangle$. Let us assume that the qubits are encoded into polarization states of single photons. The controlled SWAP operation can be converted to a controlled phase shift with the use of a balanced Mach-Zehnder interferometer, see Fig. 3. Depending on the state of the control photon C, the phase shift in the left arm of the interferometer should be either 0 or π; the latter results in the effective swap of the photons A and B at the output of the interferometer. In Milburn's scheme, the controlled phase shift is achieved by a medium with cross-Kerr nonlinearity.
In the spirit of linear optics quantum computing, we suggest to replace the Kerr medium with a linear interferometric scheme, and to employ ancilla photons and postselection conditioned on single-photon detection to simulate the required cross-Kerr interaction [33]. Since we assume polarization encoding, the conditional phase shift has to be applied to both the vertically and horizontally polarized modes in the left arm of the Mach-Zehnder interferometer in Fig. 3. In what follows we describe a scheme which provides this conditional phase shift for a single mode; the linear optics Fredkin gate then involves two such basic blocks acting in series on the vertically and horizontally polarized modes, see Fig. 3. The basic block is depicted in detail in Fig. 4. The scheme requires three ancilla photons: a maximally entangled pair of photons in the state $\frac{1}{\sqrt{2}}(|VV\rangle + |HH\rangle)$ emitted by the source (EPR) and an additional single photon in mode 2. The proposed setup consists of two main parts. The first part is the quantum parity check [8,13] between the control photon in mode C and one photon from the auxiliary EPR beam. The check is based on a coupling of these photons on a polarizing beam splitter PBS followed by a detection of one of the outputs in the basis $\frac{1}{\sqrt{2}}(|V\rangle \pm |H\rangle)$. The detectors should be able to resolve the number of photons in the beam, and the parity check is successful if a single photon is detected by one detector and no photon is detected by the other detector, which happens with probability 1/2. The parity check effectively copies, in the computational basis, the state of the control photon in spatial mode C onto the auxiliary photon in spatial modes 3 and 4, and we can write the successful parity-check transformation as $\alpha|H\rangle_C + \beta|V\rangle_C \rightarrow \alpha|H\rangle_C|1\rangle_3|0\rangle_4 + \beta|V\rangle_C|0\rangle_3|1\rangle_4$ (up to normalization). The parity check allows us to control the phase shift of mode 1 indirectly by the auxiliary photon while preserving the original control photon. A similar trick has been used in the recent experimental implementations of the quantum C-NOT gate [14,17,18].
The second part of the scheme in Fig. 4 consists of a linear interferometer where the photons in mode 1 are combined with the ancilla photons in modes 2, 3 and 4. Note that the interferometer also has three other auxiliary input ports in the vacuum state. All output modes of the interferometer except for mode 1 are monitored with photon-number resolving detectors, and the conditional phase shift is successfully applied if a single photon is detected in modes 2 and 3 and no photons are observed in the other modes.
The purpose of the interferometer is to conditionally induce a phase shift π in mode 1 provided that there is a photon in the input mode 4, and induce no shift if the photon is in mode 3. Since there can be no more than 2 photons in mode 1 (the two photons whose polarization states should be conditionally swapped), it suffices to achieve the correct conditional phase shift in the subspace of the Fock states $|0\rangle_1$, $|1\rangle_1$ and $|2\rangle_1$.
Mathematically, the interferometer is described by a unitary matrix U which governs the transformation between the input and output modes. The input creation operators $a^\dagger_{\mathrm{in},j}$ can be expressed as linear superpositions of the output creation operators $a^\dagger_{\mathrm{out},k}$ according to

$a^\dagger_{\mathrm{in},j} = \sum_{k=1}^{7} u_{jk}\, a^\dagger_{\mathrm{out},k}$, (16)

where $u_{jk}$ are the elements of U. Since we condition on observing no photons in modes 4-7, in our subsequent calculations we will explicitly need only the coefficients $u_{jk}$ with j = 1, 2, 3, 4 and k = 1, 2, 3.
Due to the linearity we can treat separately the cases when the control photon is in state |H⟩ and |V⟩. Assume first that it is in state |H⟩. The two auxiliary photons are then in input modes 2 and 3 and, conditionally on detecting a single photon in each of the output modes 2 and 3 and no photon in all other modes (except for mode 1, which is not measured upon), we obtain the following transformation,

$|j\rangle_1 \rightarrow x_j |j\rangle_1, \qquad j = 0, 1, 2$, (17)

where the coefficients $x_j$ can be expressed as follows,

$x_0 = u_{22}u_{33} + u_{23}u_{32}$,
$x_1 = u_{11}(u_{22}u_{33}+u_{23}u_{32}) + u_{12}(u_{21}u_{33}+u_{23}u_{31}) + u_{13}(u_{21}u_{32}+u_{22}u_{31})$,
$x_2 = u_{11}^2(u_{22}u_{33}+u_{23}u_{32}) + 2u_{11}u_{12}(u_{21}u_{33}+u_{23}u_{31}) + 2u_{11}u_{13}(u_{21}u_{32}+u_{22}u_{31}) + 2u_{12}u_{13}u_{21}u_{31}$. (18)

If the control photon is in state |V⟩, then the conditional transformation reads

$|j\rangle_1 \rightarrow y_j |j\rangle_1, \qquad j = 0, 1, 2$, (19)

where the coefficients $y_j$ can be expressed in the same way as $x_j$, only the matrix elements $u_{3k}$ in Eq. (18) must be replaced with $u_{4k}$.
We want to implement a conditional π-phase shift in mode 1. This will be achieved if

$x_0 = x_1 = x_2 = q, \qquad y_0 = -y_1 = y_2 = q$, (20)

where q < 1 is some shrinking factor arising due to the probabilistic nature of the gate. A low q reduces the probability of success of the gate, which scales as $P \propto |q|^2$, but does not alter its operation. The maximum q that can be chosen is determined by the constraint that the $u_{jk}$ ($1 \leq j \leq 4$, $1 \leq k \leq 3$) must form a submatrix of a unitary matrix. As shown in the Appendix, it is possible to efficiently numerically determine whether a given set of $u_{jk}$ may form a submatrix of U, so that also the maximum q can be determined numerically. By solving the system of nonlinear equations (20) we can find the matrix elements $u_{jk}$ specifying the interferometer which implements the conditional phase shift. Note that the system is underdetermined, hence there exist infinitely many interferometers satisfying (20). To see this, it is convenient to rewrite these equations in matrix form,

$M\mathbf{u}_3 = q\,(1, 1, 1)^T, \qquad M\mathbf{u}_4 = q\,(1, -1, 1)^T$, (21)

where $\mathbf{u}_3 = (u_{31}, u_{32}, u_{33})^T$ and $\mathbf{u}_4 = (u_{41}, u_{42}, u_{43})^T$ are column vectors and M is a matrix whose elements can be expressed in terms of $u_{1k}$ and $u_{2k}$. Thus when looking for a solution to Eqs. (21) we can choose arbitrary $u_{1k}$ and $u_{2k}$ and, provided that $\det M \neq 0$, we can for a given q calculate $\mathbf{u}_3$ and $\mathbf{u}_4$ by solving the system of linear equations (21).
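The linear-solve procedure can be illustrated as follows. The coefficient formulas inside `coeffs` are our reconstruction of the multiphoton interference amplitudes (permanent-type expressions) and should be treated as an assumption rather than the paper's exact Eq. (18); the procedure itself (choose $u_{1k}$ and $u_{2k}$, build M column by column, then solve for $\mathbf{u}_3$ and $\mathbf{u}_4$) follows the text:

```python
import numpy as np

rng = np.random.default_rng(2)

def coeffs(u1, u2, u3):
    """x_0, x_1, x_2 of the conditional transformation (reconstructed,
    assumed form of Eq. (18)); each is linear in the elements of u3."""
    x0 = u2[1] * u3[2] + u2[2] * u3[1]
    x1 = (u1[0] * (u2[1] * u3[2] + u2[2] * u3[1])
          + u1[1] * (u2[0] * u3[2] + u2[2] * u3[0])
          + u1[2] * (u2[0] * u3[1] + u2[1] * u3[0]))
    x2 = (u1[0] ** 2 * (u2[1] * u3[2] + u2[2] * u3[1])
          + 2 * u1[0] * u1[1] * (u2[0] * u3[2] + u2[2] * u3[0])
          + 2 * u1[0] * u1[2] * (u2[0] * u3[1] + u2[1] * u3[0])
          + 2 * u1[1] * u1[2] * u2[0] * u3[0])
    return np.array([x0, x1, x2])

# Choose the free rows u_1k, u_2k arbitrarily; linearity in u3 lets us
# build the matrix M of Eq. (21) column by column from unit vectors.
u1, u2 = rng.standard_normal(3), rng.standard_normal(3)
M = np.column_stack([coeffs(u1, u2, e) for e in np.eye(3)])

# Solve Eq. (21) for a given shrinking factor q.
q = 0.05
u3 = np.linalg.solve(M, q * np.array([1, 1, 1]))
u4 = np.linalg.solve(M, q * np.array([1, -1, 1]))

# Check the target conditions (20): x_j = q and y_j = (-1)**j * q.
assert np.allclose(coeffs(u1, u2, u3), [q, q, q])
assert np.allclose(coeffs(u1, u2, u4), [q, -q, q])
```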
Let us now present a particular example of an analytical solution, in which most of the free matrix elements $u_{1k}$ and $u_{2k}$ are fixed by a specific choice and the remaining elements $u_{3k}$ and $u_{4k}$ then follow in closed form. The maximum |q| achievable within this analytical solution was determined numerically, and we found that it is optimal to choose $u_{11,\mathrm{opt}} = 0.494$ and $u_{22,\mathrm{opt}} = 0.416$, yielding $q_{\mathrm{opt}} = 0.0638$. Since the Fredkin gate in Fig. 3 includes two conditional phase-shift gates, the total probability of success of the gate is given by $P_{\mathrm{succ}} = \frac{1}{4}|q|^4$, where the factor $\frac{1}{4}$ appears due to the two quantum parity checks. On inserting $q_{\mathrm{opt}}$ into this formula we obtain $P_{\mathrm{succ}} \approx 4.2 \times 10^{-6}$, which is rather small. However, it should be stressed that this is not the maximum probability of success that could be attained with our scheme. It is possible to improve the success rate by several orders of magnitude by performing numerical optimization over all relevant parameters $u_{1k}$ and $u_{2k}$. We have carried out a thorough numerical search and the maximum $P_{\mathrm{succ}}$ obtained in this way reads $P_{\mathrm{succ,max}} = 4.1 \times 10^{-3}$.
It is instructive to compare this value with the probability of success that could be achieved if one attempted to implement the Fredkin gate as a sequence of two-qubit unitaries. It was shown by Smolin and DiVincenzo that five two-qubit quantum gates suffice to implement the Fredkin gate [34]. Making the very optimistic assumption that, using two ancilla photons per gate, each of these gates can be implemented with probability 1/4, similarly as the C-NOT [17,18], we arrive at a total probability $P'_{\mathrm{succ}} = 4^{-5} \approx 9.8 \times 10^{-4}$. Thus our scheme for the Fredkin gate could potentially attain a higher probability of success while being more economical in resources, because it requires only 6 ancilla photons instead of 10.
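The quoted probabilities are easy to re-derive from the stated formulas:

```python
# Probability bookkeeping for the Fredkin gate (values quoted in the text):
q_opt = 0.0638                      # shrinking factor of the analytical solution
p_analytic = q_opt ** 4 / 4         # two phase-shift blocks, two parity checks
p_numeric = 4.1e-3                  # best value found by numerical optimization
p_sequence = 4 ** -5                # five two-qubit gates at success 1/4 each

assert abs(p_analytic - 4.2e-6) < 1e-7
assert abs(p_sequence - 9.8e-4) < 1e-5
assert p_numeric > p_sequence       # the direct scheme beats the gate sequence
```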
IV. CONCLUSIONS
We have devised schemes for linear optics quantum Toffoli and Fredkin gates. In the spirit of linear optics quantum computing, the gates do not require nonlinear interaction and instead rely on multiphoton interference, ancilla photons and postselection conditioned on single-photon detection. The key feature of the proposed setups is that they are directly tailored to the implementation of the multiqubit Toffoli or Fredkin gate. This should be contrasted with the common approaches where the multiqubit gates are decomposed into a sequence of two- and single-qubit gates.
Given the current state of the technology, our direct approach to multiqubit gates may be much more efficient than implementations relying on a sequence of two-qubit gates. In particular, the experimental demonstration of the three-qubit quantum Toffoli gate in the coincidence basis would require only three photons and the observation of three-photon coincidences, which is well within the reach of current technology.
Despite their advantages, the present schemes still suffer from some weaknesses. The probability of success of the N-qubit Toffoli gate exponentially decreases with growing N, and also the probability of success of the Fredkin gate, $P_{\mathrm{succ}} \approx 4.1 \times 10^{-3}$, is not very high. Another drawback lies in the fact that the setups for both the Toffoli and Fredkin gates require interferometric stability, which is hard to achieve and maintain. In contrast, recent experimental demonstrations of the quantum linear-optical C-NOT gate relied solely on the Hong-Ou-Mandel interference effect [19,20,21], which is much more robust against small length fluctuations.
It remains an interesting open question whether a scheme avoiding the problems with interferometric stability could be devised also for the Toffoli and Fredkin gates. Another important open issue is the maximum achievable success rate of these gates, either without ancillas (i.e. operating in the coincidence basis) or with a given fixed amount of auxiliary photons. We hope that the present paper will stimulate further theoretical as well as experimental investigations along these lines, potentially resulting in an important step towards linear optics quantum computing.
Acknowledgments
The author would like to thank L. Mišta for helpful discussions and comments. This work was supported under the Research project Measurement and Information in Optics MSM 6198959213 of the Czech Ministry of Education.
APPENDIX: TESTING THE UNITARITY
Here we show how to test whether a 4 × 3 matrix $u_{jk}$, $1 \leq j \leq 4$, $1 \leq k \leq 3$, could be a submatrix of a larger 7 × 7 unitary matrix U. The procedure consists of two steps which have to be repeated for each $j \in \{1, 2, 3, 4\}$. We start from j = 1.
Step (i): We determine the element $u_{j,j+3}$ from the normalization of the jth row of U,

$u_{j,j+3} = \sqrt{1 - \sum_{l=1}^{j+2} |u_{jl}|^2}$. (A1)

If this expression yields a purely imaginary $u_{j,j+3}$ then $u_{jk}$ could not form a part of a unitary matrix and we terminate the test.
Step (ii): We must guarantee that the rows of U are mutually orthogonal. This can be achieved by calculating $u_{k,j+3}$ from the orthogonality condition $\sum_l u^*_{jl} u_{kl} = 0$, $j \neq k$.
For all k satisfying $j < k \leq 4$ we thus have

$u_{k,j+3} = -\frac{1}{u^*_{j,j+3}} \sum_{l=1}^{j+2} u^*_{jl}\, u_{kl}$. (A2)

Finally, we increase j by 1.
The steps (i) and (ii) have to be repeated until j = 4 is reached. If all four iterations succeed then at the end we obtain a 4 × 7 isometry matrix that can easily be completed to form a unitary matrix. Otherwise we know from step (i) that the matrix $u_{jk}$ could not form a submatrix of a unitary matrix.
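The appendix test translates directly into code. Below is a minimal numpy sketch, with an extra guard against a zero pivot in step (ii), an edge case the text does not discuss:

```python
import numpy as np

def complete_to_isometry(u):
    """Appendix test (sketch): try to extend a 4x3 matrix u to a 4x7 matrix
    with orthonormal rows by filling columns 4..7 as in steps (i) and (ii).
    Returns (True, 4x7 isometry) on success, (False, None) if u cannot be
    a submatrix of a 7x7 unitary."""
    V = np.zeros((4, 7), dtype=complex)
    V[:, :3] = u
    for j in range(4):                      # j = 1..4 in the text
        # Step (i): fix u_{j,j+3} from the normalization of row j.
        s = 1 - np.sum(np.abs(V[j, : j + 3]) ** 2)
        if s < 0:                           # u_{j,j+3} would be purely imaginary
            return False, None
        V[j, j + 3] = np.sqrt(s)
        if V[j, j + 3] == 0 and j < 3:      # zero pivot: cannot divide in step (ii)
            return False, None
        # Step (ii): fix u_{k,j+3} (k > j) from orthogonality with row j.
        for k in range(j + 1, 4):
            V[k, j + 3] = -np.vdot(V[j, : j + 3], V[k, : j + 3]) / np.conj(V[j, j + 3])
    return True, V

# A submatrix taken from a genuine 7x7 unitary must pass the test ...
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((7, 7)))
ok, V = complete_to_isometry(Q[:4, :3])
assert ok and np.allclose(V @ V.conj().T, np.eye(4))

# ... while a matrix with a row of norm > 1 must fail.
bad = np.zeros((4, 3)); bad[0, 0] = 1.1
assert complete_to_isometry(bad)[0] is False
```

In effect the filled-in columns form a triangular (Cholesky-type) factor of $I - uu^\dagger$, so the test succeeds exactly when that matrix is positive semidefinite.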
Radiosurgery with Chemotherapy as an Alternative to RT for Glioblastoma Multiforme Patients 65 Years Old or Older: A Prospective Review of 40 Patients
Introduction: Elderly patients have historically been treated with conventional external beam radiation therapy (EBRT) for glioblastoma multiforme (GBM). Methods: We reviewed the results of treatment approaches, which included surgery, chemotherapy, and Gamma Knife (GK) radiosurgery, as an alternative to EBRT in this cohort of patients. Patients were treated during the period of 1999-2010 and were 65 years of age and older with histologically confirmed GBM. Results: Forty patients, 65 years of age and older without previous radiation therapy (RT), were identified. Median age was 75 years (range 65-95). Median overall survival (OS) time was 10.9 months. Analysis showed that patients treated with GK, surgical resection, and chemotherapy (median OS of 14.2 months) had a significantly higher (P=.03) OS than patients treated with GK, chemotherapy, and no surgical resection (median OS of 8.9 months) and than patients treated with GK, surgical resection, and no chemotherapy (median OS of 5.37 months). Conclusions: In this study of glioblastoma patients over the age of 65 years with no previous RT, aggressive treatment with Gamma Knife radiosurgery, chemotherapy, and surgery is associated with improved OS. Categories: Radiation Oncology, Neurosurgery
Introduction
Glioblastoma multiforme (GBM) is the most common histological type of primary brain tumor in adults. It accounts for more than half of all adult primary brain tumors diagnosed annually in the United States [1]. GBM is most prevalent in patients 65-84 years of age, although this is the group that has been studied the least in the literature [2]. An increase in the diagnosis of GBM has been noted across the country. It is believed that stereotactic biopsy and the improvement and increased usage of imaging techniques have contributed to this upward trend [3]. For those diagnosed, the current standard of care is surgical resection along with postoperative radiotherapy [4]. The optimal management of this CNS neoplasm in the elderly is still a matter of debate, with several combinations being studied. A number of small, prospective studies have shown the benefit of a tri-modal approach involving chemotherapy, radiation therapy, and surgical resection in elderly patients [4][5][6][7][8]. Most recently, the RTOG studied the use of upfront stereotactic radiosurgery, radiation therapy, and carmustine (BCNU) versus radiation therapy and BCNU alone. The results did not demonstrate an additional difference in survival [9].
We are unaware of a study that has examined radiosurgery alone with chemotherapy. We are interested in determining whether our experience using Gamma Knife, chemotherapy, and surgery to treat GBM in patients 65 years old and older is comparable to the current standard of care, with overall survival as the primary endpoint of this prospective study.
Treatments
All patients had a pathologically confirmed diagnosis of GBM. Date of diagnosis was defined as the date of biopsy or surgery. We defined all surgical interventions beyond biopsy as "surgery", except as otherwise stated. We did not differentiate between a total or subtotal resection. The Gamma Knife model used changed throughout the course of the study, beginning with the Model U, followed by the Model C, and currently, the Perfexion.
Statistical analysis
We defined overall survival (OS) time as the primary endpoint of this retrospective study. OS was measured from the date of diagnosis of glioblastoma multiforme. We performed uni- and multi-variate analyses to determine the differences in OS between our treatment modalities and our patients' KPS scores pre-treatment. We set the P value at .05 as the criterion for significance.
All tests of statistical significance were two-sided and all analyses were performed using Microsoft Excel for Mac 2011 (Microsoft Inc.).
Results
A full listing of patient and treatment characteristics can be found in Table 1. The mean age was 73.1 years (SD 6.9 years; range, 65-95), and the gender distribution was slightly skewed in favor of females (55% vs 45%). All patients received Gamma Knife radiosurgery in a single treatment; some had multiple treatments to treat tumor recurrence or new lesions. The median total dose was 11 Gy with a median of 9 isocenters to the 45% isodose line. Chemotherapeutic agents were limited to 25 of the 40 patients (62.5%). They received avastin (4%), temozolomide (80%), avastin + temozolomide (12%), and irinotecan + avastin (4%). Surgically, all patients received at least a biopsy, and 21 (52.5%) received total or subtotal resection. In the patients who entered treatment with a KPS score of 80 or better, the median KPS improved from 80 to 90. Patients entering treatment with a KPS score of 70 or worse maintained a median KPS value of 70 before and after treatment. For the same group of patients that began treatment with a KPS of 80 or better, the median OS was 11.79 months as compared to 8.25 months for the group that began with 70 or worse (P=.071). OS broken down by KPS can be found in Figure 1B. Analysis showed that patients treated with GK, surgical resection, and chemotherapy (median OS of 14.2 months) had a significantly higher (P=.03) OS than patients treated with GK, chemotherapy, and no surgical resection (median OS of 8.9 months) and than patients treated with GK, surgical resection, and no chemotherapy (median OS of 5.37 months). OS for all patients was 10.9 months (Figure 1). OS by treatment modality can be found in Figure 1C.
Discussion
The treatment of GBM in elderly populations has largely been left to the discretion of physicians and treatment centers, due to the lack of a robust body of data and consensus. Different modalities have been employed in several combinations, ranging from conservative to aggressive. The current standard of care is surgical resection combined with postoperative radiotherapy. Mohan, et al. demonstrated that radiation was of value in increasing median OS [10]. Roa, et al. demonstrated that there was no statistical difference between standard radiation therapy (60 Gy in 30 fractions) and a shorter course (40 Gy in 15 fractions) [11]. In studies on external beam radiation for GBM, Scott, et al. demonstrated that a combination of chemotherapy, surgical resection, and fractionated RT in elderly patients was associated with a median OS of 13.3 months [12]. Our study results strongly support the use of Gamma Knife (GK) radiosurgery as an alternative to fractionated external beam radiation therapy.
The use of adjuvant chemotherapy as a standard of care for newly diagnosed patients with GBM was established by the study conducted by Stupp, et al. [13]. The addition of temozolomide to radiotherapy for newly diagnosed glioblastoma resulted in a clinically meaningful and statistically significant survival benefit with minimal additional toxicity. In elderly patients with GBM, the addition of chemotherapy to surgical and radiotherapy protocols was shown to be beneficial by Brandes, et al. [6]. The trial showed that the addition of chemotherapy confers significant benefit to patients 65 or older. Our study results are in agreement with their conclusions. Although we did not parse the differences between different chemotherapeutic agents, our study showed that the addition of chemotherapy increased OS for patients treated with GK and surgical resection (median OS of 14.3 months) compared with those who did not receive chemotherapy (median OS of 5.37 months).
Randomized studies established the role of radiotherapy as part of the treatment of GBM [14][15]. This role has been confirmed in a more recent systematic review [16]. Building upon this evidence, dose escalation studies were performed but failed to show a benefit; radiosurgery in the setting of recurrent GBM, however, was shown to be potentially promising. The RTOG 93-05 trial investigated the use of radiosurgery upfront in newly diagnosed GBM patients. This trial failed to show a benefit of this approach [9], and radiosurgery subsequently fell out of favor in this setting. However, we began utilizing Gamma Knife radiosurgery in place of conventional fractionated EBRT in elderly patients at upfront diagnosis to minimize the treatment time and effort required during the patient's remaining short life span. At that time, median survival for elderly patients was only a handful of months. We agree with the literature that EBRT as part of aggressive treatment of GBM in elderly patients is warranted [12]. This alternative approach offers a truly hypofractionated schema using a single-fraction regimen. This study confirms that our results are equivalent to those reported in the literature with EBRT (short vs long course) for elderly patients with GBM [11].
A final consideration in the treatment of elderly patients with GBM is the cognitive implications of the RT modalities.
Conclusions
Aggressive treatment of the elderly is justified, given the difference in outcome between patients who received Gamma Knife, surgery, and chemotherapy versus those without surgery, and given that the addition of chemotherapy demonstrates increased OS. Furthermore, the results of this patient cohort using radiosurgery alone are better than those using conventional radiation therapy. If further studies reveal that a single treatment using radiosurgery is equivalent to five- to six-week fractionated paradigms, this saves the patients a significant amount of time spent in therapy, avoids toxicity, and allows for further radiosurgical therapy to distal sites if necessary.
FIGURE 1: OS for all patients was 10.9 months

We prospectively treated 167 patients at least 65 years old with a clinical diagnosis of glioblastoma, all of whom had been treated solely at the Miami Neuroscience Center during the period of December 1993 to September 2010. Patient selection was based on the knowledge that patients had not undergone whole brain radiation or fractionated radiotherapy. Out of this group, we excluded 127 patients who did not meet the aforementioned criteria or whose pathology was not confirmed or available. This left 40 patients, 65 years of age and older, with a histologically proven diagnosis of GBM and no previous radiation therapy. No other inclusion or exclusion criteria were applied.
TABLE 2: Statistical analysis of survival from diagnosis
Abbreviations: Sx, surgery (greater than biopsy); GK, Gamma Knife; Chemo, chemotherapy; KPS, Karnofsky Performance Scale.

Chang, et al. [17] demonstrated that patients with brain metastasis treated with stereotactic radiosurgery (SRS) plus whole brain radiation therapy (WBRT) were at a greater risk of a significant decline in learning and memory function by four months compared with the group that received SRS alone. They go on to recommend SRS and close clinical monitoring as the preferred treatment strategy to better preserve learning and memory in patients with newly diagnosed brain metastases [17]. Although the pathology is different, our study results are in agreement with Chang, et al. Of the patients that entered treatment with a KPS score of 80 or better, the median KPS improved from 80 to 90. Patients entering treatment with a KPS score of 70 or worse maintained a median KPS value of 70 before and after treatment. Additionally, our data support Scott, et al. in that patients with an initial KPS of 80 or better had a median OS of 11.79 months as compared to those with an initial KPS of 70 or below, who had a median OS of 8.25 months. Our study supports the use of GK radiosurgery as a way to preserve performance status in elderly patients with GBM.
Prevalence of dirofilariasis in cats in the Kars province, Turkey 1)
Dirofilariasis is a zoonotic vector-borne disease that threatens public health and shows an increasing distribution worldwide (1). The highest numbers of cases are observed in tropical and subtropical regions (4). The prevalence of dirofilariasis tends to be rather high in river valleys and humid regions, because such locations are favourable for the vectors of the disease (26). The two main causative agents of the disease are Dirofilaria immitis and Dirofilaria repens. Of these agents, D. immitis causes heartworm disease, and D. repens causes subcutaneous filariosis in dogs and cats. These agents infect both wild and domestic carnivores in Europe, Asia and Africa (14). Both D. immitis and D. repens are transmitted by mosquitoes, and those belonging to the genera Culex, Aedes and Anopheles are the main vectors of the disease (28, 30). The length of the adult female parasites may reach 17 cm in D. repens and may be up to 30 cm in D. immitis. The life cycles of both Dirofilaria species are similar; however, D. repens differs in that the adult parasites of this species are mainly found in subcutaneous tissues (13). The definitive (final) hosts of D. repens are dogs and other carnivores. Humans serve as an incidental host for this parasite. In regions where the disease is endemic, the prevalence in cats is lower than that in dogs (3). The prevalence of D. immitis in cats is affected by several factors: the population density of the vectors, the mosquito species found in a particular region, and an endemic course of dirofilariasis in dogs in the region are all risk factors (19). Generally, cats have a natural resistance to infection with D. immitis (6, 12). Although cats infected with this species develop pathological changes in the respiratory system, the disease is mostly asymptomatic in these animals (12). In infected cats, clinical signs such as acute death, intermittent dyspnoea, chronic cough, and vomiting may be present.
The migration of the parasitic larvae to the brain may cause neurological symptoms such as blindness, syncope, collapse and vestibular signs (20). Dirofilariasis has been reported in dogs in several studies from Turkey; to the authors' knowledge, however, no report is available in the literature on this zoonotic disease in cats. This study was aimed at determining the seroprevalence of dirofilariasis in indoor cats with outdoor access in the Kars region. 1) This research was supported by Kafkas University Scientific Research Projects Commission.
Material and methods
This study was conducted after receiving approval from Kafkas University Animal Experiments Local Ethics Committee (KAÜ-HADYEK/ 2016-074).
Study area. The study was carried out in the Kars province, which is located in northeast Anatolia and has a cold climate (11). The centre of the Kars province and the districts Sarıkamış, Arpaçay and Selim were selected as the sampling locations. The average temperatures of Kars centre, Arpaçay, Selim and Sarıkamış for the last three years (2016-2018) were 6.21°C, 6.66°C, 5.98°C and 5.07°C, respectively. The average rainfall was 37.83 kg/m², 17.16 kg/m², 34.48 kg/m² and 33.38 kg/m², respectively. The average relative humidity was 63.65%, 68.08%, 70.22% and 68.25% (18).
Animals. A total of 150 indoor cats with outdoor access, of varying ages and comprising 71 males and 79 females, constituted the study material.
Collection of blood samples. Of the blood samples collected, 78 belonged to the centre of the Kars province, 30 to Sarıkamış, 24 to Arpaçay, and 18 to Selim. Of the animals included in the study, 50 were 1-2 years old, 56 were 3-4 years old, and 44 were aged 5 years or older. Five-mL blood samples were taken from the radial vein of each animal for analyses. The blood samples were centrifuged at 3000 rpm for 10 min for the extraction of sera. The serum samples were transferred into Eppendorf tubes and stored at -20°C until being analysed. Measurements were performed using a commercial Enzyme-Linked Immunosorbent Assay (ELISA) kit (MyBiosource®).
Statistical analysis. The results were analyzed using chi-square tests in the SPSS 20.0 statistical software package. Values of P < 0.05 were considered to be statistically significant.
Results and discussion
The results of the present study demonstrated that dirofilariasis seropositivity was 29.5% (23/78) in the centre of the Kars province, 16.7% (5/30) in Sarıkamış, 5.6% (1/18) in Selim, and 8.3% (2/24) in Arpaçay. The collective evaluation of all sampling localities revealed a mean seropositivity rate of 20.7% (31/150) in cats in the Kars region (Tab. 1). The highest seropositivity rate was detected in the blood samples belonging to the centre of the Kars province. Analyses revealed that the differences observed between the sampling localities were statistically significant (P < 0.05).
Analysis results showed that out of the 79 female cats 18 (22.8%) and out of the 71 male cats 13 (18.3%) were seropositive for dirofilariasis (Tab. 2). Although the seropositivity rate of the female cats was higher than that of the male cats, the difference observed between the two sexes was statistically insignificant (P > 0.05).
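The two chi-square results above can be checked directly from the published counts. The following Python sketch is an illustration of that calculation, not the authors' SPSS 20.0 workflow; it computes Pearson's chi-square statistic by hand for the locality and sex contingency tables and compares it against the usual 0.05 critical values:

```python
# Hedged illustration: recomputing Pearson's chi-square from the published
# seropositive/total counts. This is a sketch, not the authors' SPSS workflow.

def chi_square(groups):
    """groups: list of (positives, total) pairs; returns the chi-square statistic."""
    pos = sum(p for p, _ in groups)
    n = sum(t for _, t in groups)
    rate = pos / n  # pooled seropositivity rate under the null hypothesis
    stat = 0.0
    for p, t in groups:
        # contributions of the positive and negative cells of this group
        for obs, exp in ((p, t * rate), (t - p, t * (1 - rate))):
            stat += (obs - exp) ** 2 / exp
    return stat

# Seropositivity by locality: Kars centre 23/78, Sarikamis 5/30, Selim 1/18, Arpacay 2/24
by_locality = [(23, 78), (5, 30), (1, 18), (2, 24)]
# Seropositivity by sex: females 18/79, males 13/71
by_sex = [(18, 79), (13, 71)]

chi_loc = chi_square(by_locality)  # df = 3, critical value at P = 0.05 is 7.815
chi_sex = chi_square(by_sex)       # df = 1, critical value at P = 0.05 is 3.841
print(f"locality: chi2 = {chi_loc:.2f} (significant: {chi_loc > 7.815})")
print(f"sex:      chi2 = {chi_sex:.2f} (significant: {chi_sex > 3.841})")
```

The statistic for locality exceeds the df = 3 critical value while the sex statistic falls well below the df = 1 critical value, consistent with the paper's reported significance (P < 0.05 for locality, P > 0.05 for sex).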
Dirofilariasis is caused by filarial nematodes belonging to the genus Dirofilaria (16). In regions where dirofilariasis is endemic in dogs, cats are also considered to be under risk of infection and most possibly carry the disease (10). In the present study, which was aimed at determining the seroprevalence of dirofilariasis in indoor cats with outdoor access in the Kars region, the prevalence of the disease was ascertained as 20.7%. Although the prevalence of dirofilariasis in cats has not been investigated in Turkey before, studies are available indicating different prevalences for the disease in dogs. Accordingly, in serological research carried out in Sivas (2), Thrace (8), Elazığ (5), Kars-Iğdır (29), Iğdır (27) and Van (15), the prevalence of D. immitis in dogs has been reported to range between 2.9% and 40%.
The prevalence of dirofilariasis detected in cats in the present study was observed to fall within the prevalence range reported for D. immitis in the above-mentioned studies carried out on dogs in Turkey, but was lower than the rates reported for dogs in Kars-Iğdır (29) and Iğdır (27). The differences observed between the results of the present study and the indicated investigations were attributed to differences in the species investigated, the regions, and the number of samples collected.
Tab. 1. Seropositivity rates for dirofilariasis in cats in different regions
In research carried out on cats with PCR, the prevalence of D. repens was reported as 0.7% in Poland (3). In research carried out on dogs with PCR, the prevalence of D. repens was reported as 38.3% (3) and 25.8% (9). Furthermore, the prevalence of D. immitis antibody-positive cats has been reported to range between 0.54% and 24.9% (7, 21, 22, 25).
Montoya-Alonso et al. (24) reported that sex influences the prevalence of D. immitis in cats (P < 0.001), with a higher prevalence in male cats than in females. On the other hand, some other reports suggest that sex has no significance for the prevalence of the disease, while nevertheless indicating a higher prevalence in male cats (17, 23). The present study also demonstrated that sex has no significance for the prevalence of dirofilariasis in cats. In contrast, Magi et al. (22) reported a statistically insignificant but higher prevalence of D. immitis in female cats in comparison to male cats. Our study is in agreement with the results of Magi et al. (22).
There are several reports in the literature indicating that age has no statistically significant effect on the prevalence of disease in cats (23, 24) and dogs (2, 29). Similar to these reports, the present study revealed no relation between the age of the animals and the prevalence of the disease. On the other hand, the prevalence of D. immitis has been reported to significantly increase with advanced age in cats (17) and in dogs (5). Differently, although the result was statistically insignificant, Magi et al. (22) reported a higher prevalence in 1 to 2 year old cats compared to older cats, which is in agreement with the results of our study.
In conclusion, the present study is important in that it is the first epidemiological report on the prevalence of dirofilariasis in cats from the Kars province and Turkey. The results have revealed a dirofilariasis seroprevalence of 20.7% in cats in the study area. Given that cats are at risk of infection in regions where the disease is endemic in dogs, it was concluded that similar studies need to be conducted in cats in other regions of Turkey. Furthermore, since the highest seropositivity rate was detected in the centre of the Kars province, cats in the city centre appear to be at particularly high risk of infection, which calls for prophylactic measures.
Digital Twin perspective of Fourth Industrial and Healthcare Revolution
Digital Twin (DT) is bringing a revolution to our lives through the digital representation of physical systems. DT arises from the joint usage of various technologies such as Cyber-Physical Systems (CPS), the Internet of Things (IoT), Big Data, Edge Computing (EC), Artificial Intelligence (AI), and Machine Learning (ML). DTs are being built to optimize a wide range of applications in industry, healthcare, smart cities, smart homes, and beyond, but the technology is still in its early development stages. This paper fills the gaps by combining extensive information on the technologies utilized in the creation of DTs in industry and healthcare. The paper focuses on the characteristics of DT; the communication technologies and tools utilized in the creation of DT models; reference models and standards; and recent research in smart manufacturing and healthcare. Challenges and open issues that need attention are also discussed.
I. INTRODUCTION
The fourth industrial revolution has changed the world completely, directing it into an age of automation and digitization. It is an era of digital transformation that has taken industries, healthcare, communication, homes, and offices by storm, with various technologies helping us accomplish our daily tasks. The industrial revolutions took decades: the First Industrial Revolution started in 1850, and the term Fourth Industrial Revolution was coined by Klaus Schwab in 2015. The First utilized the power of water and steam to mechanize production. The Second turned to electrical power for mass production. The Third made use of electronics and information technologies to automate production. The Fourth is the fusion of different technologies that blurs the lines between the physical, digital, and biological spheres. Industry 4.0 (I4.0), also known as the Fourth Industrial Revolution, is a hit rather than hype. At heart, Industry 4.0 is the trend towards automation and data exchange. It combines the IoT [1,2], big data [3,4], CPS [5], and Artificial Intelligence (AI) [6]. These technologies have changed our lives and, as technology advances, will keep changing them. Multiple authors have discussed the origin, impact, examples, and future trends of Industry 4.0 in different publications [7][8][9][10]. The term Industry 4.0 emerged in Germany and proposed a complete digital transformation of the product and its manufacturing [11]. It was labeled smart manufacturing in the United States [12][13][14][15][16]. Smart factories represent the future of fully automated and connected systems, operating mainly without human presence through data acquisition, processing, and the necessary actions taken on it [8]. In [17], a smart factory research model is presented with various technologies and attributes; it is an illustration created from the work of the authors of [18][19][20][21][22][23][24]. Fig. 1 represents a more compact and simpler version of the smart factory research model. It is a combination of various technologies like IoT, CPS, DT, big data, and edge, fog, and cloud computation to create a smart factory environment. Four network requirements stand out:
1. Bandwidth has a direct impact on the number of users supported and the ability to exchange large amounts of data (e.g. for predictive maintenance) [27].
2. Scalability provides smooth movement of devices/users in and out of a network without negative effects on the Quality of Service (QoS) [28] and functionality; the focus is on providing a flexible network [29].
3. Cyber security is compulsory for Industry 4.0 scenarios: it is important to protect people, industries, data, and assets from attackers [30].
4. Reliability allows the systems in an IoT scenario to work properly in any condition [31], which goes a long way toward increasing productivity.
In the 21st century, Information Technology (IT) technologies such as IoT, cloud computing, big data, and AI make it feasible to converge the physical and virtual worlds. This cyber-physical integration [32] drives the digitalization of industries. Digital Twin (DT) is the crown jewel of Industry 4.0. This technology represents the physical system in the digital world with all its features and properties. The DT of any system becomes possible when multiple technologies like IoT, AI, ML, CPS, and big data work together. With the help of these technologies and real-time sensor data from the system, the DT can perform numerous simulations, predictions, and analyses in a safe environment. Despite the increasing research on Industry 4.0, the literature remains scattered. The authors in [13,33] worked to structure and condense the vast knowledge of the multiple fields of Industry 4.0. There is a difference between this work and previous literature reviews. The authors of [34] inspected the current state of the literature on Industry 4.0, whereas [35] looked into the managerial literature only. Industry 4.0 technologies and their effects are discussed in [36][37][38][39]. Characteristics and design principles were focused on by [40,41]; human resource management and organizational implications were discussed in [42,43]. Industry 4.0 in terms of operations and supply chain management was investigated by [44]. Literature on the implementation of Industry 4.0 by [45][46][47] included the mixture of old technologies with new ones, like Enterprise Resource Planning (ERP), Computer-Aided Design (CAD), Computer-Aided Manufacturing (CAM), and Electronic Data Interchange (EDI). Industry 4.0 is a combination of different technologies, so it is not possible to focus research on a single stream. Many researchers work on different aspects of Industry 4.0 such as technologies, current state, future trends, application scenarios, and open research areas [37,48,49].
Research surveys like [50] provide a detailed insight into the DT literature, the DT lifecycle, and tools for various aspects of simulating digital models, along with comparisons. The authors of [51] provide a detailed overview of the DT definition, characteristics, open challenges, and application cases in smart manufacturing and healthcare. In [52], a systematic overview of multiple Industry 4.0 technologies and tools, and their utilization in numerous applications, is elaborated, but without comparisons of the numerous communication technologies and standards. The papers mentioned are in no aspect weak in terms of literature review or knowledge, but they miss insight into standards such as the Reference Architecture Model Industrie 4.0 (RAMI 4.0) and edge-fog-cloud computing. There is a need for literature covering IoT technology comparisons, DT simulation and modeling tools, big data analytics, edge-fog-cloud computing, open challenges, and standards for the creation of DT models, along with an overview of the research performed in manufacturing and healthcare applications. This is the motivation for writing this literature review.
The paper is organized as follows: Section II explains the reference model of Industry 4.0; Section III discusses suitable communication technologies and compares them based on characteristics like range, data rate, power consumption, and the number of users supported; Section IV presents data analysis, management, and AI-ML; Section V details edge-fog-cloud computation in Industry 4.0; Section VI explores the concept of DT in terms of benefits, application areas, tools for the creation of digital models, data acquisition, and open research issues; Section VII shares the existing research performed in smart manufacturing and healthcare; and Section VIII delineates some of the open research issues in Industry 4.0.
II. Reference Architecture Model of Industry 4.0
The reference model for Industry 4.0 was the result of the joint efforts of multiple German associations and institutions in 2015. Fig. 2 represents RAMI 4.0. Industry 4.0 applications are implemented with assistance from such a reference model.
The term IoT is mentioned in the literature by many researchers. The purpose of IoT is to provide a connection between the internet and things: "things" refers to anything, like an object or a person [64], while "internet" refers to the network of networks. The standard Internet Protocol suite (TCP/IP) is utilized worldwide to provide users with interconnected computer networks, but TCP/IP is not sufficient for many distributed applications due to the constraints of a limited number of available addresses, overhead, and energy consumption. IoT has a wide range of applications in areas such as transportation, healthcare, and utilities [65]. IoT networks can take various forms, such as Thing-to-Human, Human-to-Human, and Thing-to-Thing, connected to the internet. Individually identified objects also exchange information inside this network [66,67]. IoT is described by Sezer et al. [65] as: "IoT allows people and things to be connected anytime, anyplace, with anything and anyone, ideally using any path/network and any service". In the words of Bortolini et al. [68], IoT is a global presence providing connectivity between various networking and cooperating objects and things. IoT enables the digitization of any physical system, and the resulting digital information is useful in various ways. In terms of industry, entire production lines, such as machinery and related resources, can be the "things" managed and virtualized by Industry 4.0 [69,70]. In general, digital data can be utilized to modify system design, optimize production lines, increase efficiency, and reduce cost through the use of sensor data and a virtual replica of the physical world [69]. IoT can work in both heterogeneous and decentralized environments [71].
In other words, we can make use of IoT in industries and smart homes, among other domains. Key design properties include:
2-Interoperability: The ability of the system to communicate with various devices to achieve the same goal [83].
3-Extensibility: The ability to easily add something to the system, enabling the software to handle more functionality or interfaces without increasing the size of the system.
4-Modularity: Components of a system that can be separated and replaced or recombined to provide flexibility and variety in use.
For example, IoT is applied in manufacturing for industrial automation [84]. Another example is connecting the sensor data collected in a factory with IoT platforms to increase production efficiency through big data analysis [66]. An overview of the wireless technologies for IoT is provided in Fig. 4; various wireless technologies, a few of which are labeled in the figure, can fulfill IoT requirements in an environment.
IV. BIG DATA AND AI-ML
Increasing growth in data from IoT sources and information services is driving industries, hospitals, smart homes, and smart cities to create more tools and models to handle big data. Big data is characterized by volume, variety, value, veracity, and velocity; these characteristics are named "the 5Vs" [89,90]. This data needs to be analyzed, stored, and secured to improve system efficiency, scalability, and security. Implementing big data platforms requires significant knowledge and expertise in the data science and IT domains due to their complex infrastructures and programming models. Numerous tools are available in the market, but many are less popular due to their complexity. A trend in this domain is to create a level of abstraction over popular data processing platforms: Apache Beam allows its dataflow programming model to be executed by multiple runners like Apache Spark and Apache Flink; machine learning algorithms are applied to data streams in Apache SAMOA; and applications created on SAMOA can be executed on Apache Samza, Apache Storm, and Apache S4. The 5Vs of big data have provided a doorway to a new realm of solutions, and multiple frameworks [91][92][93][94] have been designed to utilize big data for effective analytics in various fields and applications. To overcome the challenges of big data in Industry 4.0 or any other application, AI-ML can be utilized in combination with big data. AI tries to digitally replicate three human cognitive skills: learning, self-correction, and reasoning. Digital learning is the process of converting previous data into actionable information; digital reasoning is selecting the best option to reach the desired goal; and self-correction is a repetitive process of reasoning and learning.
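The velocity and volume dimensions of the 5Vs imply that statistics often must be computed incrementally over a stream rather than over data at rest. As a hedged, framework-free sketch (the platforms named above, such as Beam, Spark, and Flink, provide this kind of processing at scale, and the sensor values here are invented), Welford's online algorithm updates the mean and variance one reading at a time:

```python
# Minimal sketch of stream-oriented analytics: Welford's online algorithm
# maintains running mean and variance in O(1) memory per sensor, so a
# high-velocity stream never needs to be stored in full.

class OnlineStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        # sample variance; undefined for fewer than two readings
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = OnlineStats()
for reading in [20.1, 20.4, 19.8, 21.0, 20.6]:  # e.g. a temperature stream
    stats.update(reading)
print(f"n={stats.n} mean={stats.mean:.2f} var={stats.variance:.3f}")
```

The same update rule scales to millions of readings per sensor, which is precisely why streaming engines favor incremental aggregations over batch recomputation.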
All such models aim to build a smart system that performs tasks which would normally require human intelligence. Various AI methods are utilized, such as machine learning, data mining, deep learning, rule-based algorithms, logic-based algorithms, and knowledge-based algorithms, with a general focus on ML and deep learning among AI approaches. This conjunction of technologies like IoT, AI-ML, and big data helps realize the concept of DT. A representation of the overall relationship between these technologies and DT is shown in Fig. 5.
The amalgamation of technologies leads to very interesting applications, especially in industries, such as indoor asset tracking [95], real-time monitoring of physical systems [96], manufacturing [97], and outdoor asset tracking [98]. IoT devices allow for real-time data acquisition, which is critical for creating DT models of physical assets [99] and for achieving maintenance [100] and optimization [101] by linking the physical system with its digital replica. There is a deep connection between data and IoT devices; thus big data analytics has a major role in developing a successful DT model.
FIGURE. 5. DT relationship with IoT, Big Data, and AI-ML
However, managing such an enormous amount of data in the industrial and DT domains requires advanced architectures, techniques, tools, frameworks, and algorithms. The authors of [102,103] have presented big data processing frameworks for industries and for maintenance in a DT setting. Cloud computation is one of the platforms that can be used to process and analyze big data [104,105]. It is important to implement applicable AI-ML techniques and algorithms to make DT models more intelligent. In the end, DT will be able to perform tasks such as:
1-Prediction (e.g. maintenance in industrial systems and healthcare status) [100].
2-Optimization through process control, planning, assembly lines, and scheduling [106,107].
3-Detecting the best resource allocation, safety issues, the best process strategy, and faults [108].
4-Dynamic decision-making based on digital twin data/physical sensor data.
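The prediction task listed above can be illustrated with a deliberately simple sketch: fit a linear degradation trend to a twin's sensor history and extrapolate when it will cross a failure threshold. Real DTs use far richer physics and ML models; the vibration readings, units, and threshold below are invented purely for illustration:

```python
# Hedged sketch of DT-style prediction: least-squares fit of a degradation
# signal, then extrapolation to a failure threshold to estimate remaining
# useful life (RUL). Data and threshold are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

hours = [0, 100, 200, 300, 400]
vibration = [1.0, 1.2, 1.4, 1.6, 1.8]  # hypothetical mm/s RMS readings
THRESHOLD = 3.0                         # hypothetical failure level (mm/s)

slope, intercept = fit_line(hours, vibration)
# Solve slope*t + intercept = THRESHOLD for t, then subtract current time.
rul = (THRESHOLD - intercept) / slope - hours[-1]
print(f"trend: {slope:.4f} mm/s per hour; predicted RUL: {rul:.0f} h")
```

A real twin would refit this model continuously as new sensor updates arrive and would feed the RUL estimate into maintenance scheduling, which is the essence of the prediction and optimization tasks above.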
Big data, AI, ML, and IoT have significant importance in Industry 4.0. Industries utilize these concepts in the same way as other fields: by processing the large amounts of data collected from smart sensors through cloud or IIoT platforms to improve the overall efficiency of operations. Finding correlations is one of the major tasks, but it is not the only job; beyond discovering patterns and correlations, computational intelligence tools (AI, ML, and big data) will bring real results when they help find the causal nexus throughout the analyzed processes. Smart healthcare applications use these concepts in healthcare monitoring, drug discovery, intensive care, diagnosis of diseases, and the training of healthcare professionals [109].
V. Edge Computing
With fast-growing numbers of IoT devices and increasing data sizes, it is necessary to reduce the computational load at the operating station or on the cloud. Edge Computing (EC) allows the network to perform computation or data processing at the edge. The integration of IoT, mobile services, and applications in complex scenarios like smart cities and Industry 4.0 has created new challenges for Cloud Computing (CC) [110,111]. Typical CC performs storage and computation of data in a centralized system; EC, however, performs data processing at the extremes (edges) of the network rather than at centralized or distributed core nodes. The term EC can be defined as computation performed at the ends of the network. EC can meet requirements for battery life, latency, response time, data protection, and privacy [112,113]. Given the various network operations that EC can perform, the edge must be designed efficiently to ensure reliability, privacy, and security. EC can provide significant support not only in industries but also in areas such as smart homes [114], smart cities [115], smart logistics, and environmental monitoring. In the healthcare domain, EC can improve efficiency by reducing data circulation and providing faster data processing [116]. Sensors and wearable devices are a way to actively monitor patients, at home or in care homes, who suffer from disease or have a high risk of heart attack [117]. More efficient methods are required to process data at the edge of the network due to the large amount of data being produced. Previous approaches such as cloudlets [118], data centers [119], and fog computing [120] can reduce the computing load on the cloud, but processing at the cloud is not efficient when data can be handled where it is produced, at the edge of the network. The authors of [112] stated some of the reasons for utilizing edge computing.
The authors mention that more services are diverted from cloud to edge of the network because data processing at the edge can guarantee shorter response time and better reliability. Edge computing will save bandwidth if a large amount of data is processed at the edge rather than at the cloud. The burgeoning growth of IoT and mobile devices has changed the purpose of edge devices from data consumer to data producer. Fig. 6. represents the infrastructure of edge, fog, and cloud.
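The bandwidth-saving argument above can be made concrete: an edge node can screen raw readings locally and forward only anomalies upstream. The sketch below is a toy illustration of that pattern (the heart-rate bounds and readings are invented), not a production edge stack:

```python
# Toy sketch of edge-side screening: the edge node processes every reading
# locally and uploads only out-of-range values, saving upstream bandwidth.
# Heart-rate bounds and the sample stream are invented for illustration.

LOW, HIGH = 50, 120  # acceptable heart-rate band (bpm), hypothetical

def screen_at_edge(readings):
    """Return only the (timestamp, bpm) readings the edge would forward to the cloud."""
    return [(t, bpm) for t, bpm in readings if not LOW <= bpm <= HIGH]

stream = [(0, 72), (1, 75), (2, 131), (3, 70), (4, 44), (5, 78)]
to_cloud = screen_at_edge(stream)
print(f"forwarded {len(to_cloud)}/{len(stream)} readings: {to_cloud}")
```

Here only two of six readings leave the edge; in a real wearable-monitoring deployment the same screening slashes upstream traffic while the cloud still receives every clinically interesting event.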
IoT has an important role in EC [110]. The authors of [121] provide details on Mobile Edge Computing (MEC), its communication technologies, and a comparison with Mobile Cloud Computing (MCC). Communication is necessary to provide interconnectivity between edge devices and to transfer data from the edge to the cloud when extensive computation is required. The IoT technologies discussed in Section III can be implemented based on the application requirements. At present, several research directions are aimed at establishing standards for the development of architectures, concepts, and processes implemented in EC solutions.
Various independent organizations and entities have proposed different specifications, i.e., security, communication protocols, data protection, and reference architectures, specifically for industrial environments. The authors of [122] presented a tiered architecture with a modular approach that helps to manage complex solutions for industries as well as smart cities, healthcare, and smart energy; the major contribution of the architecture lies in the security and privacy provided by blockchain technologies. AI and ML algorithms in combination with EC will play an important role in the advancement of many applications, i.e., healthcare and industries. Edge Machine Learning (Edge ML) is a new concept in which smart devices process data locally with the help of machine and deep learning algorithms. Edge devices can still send data to the cloud, but the ability to process the data locally provides screening of the data before it is sent, while also allowing for real-time data processing and response. In-memory computing and ML processors are inventions for the embedded chips of the future: in-memory chips provide high performance by storing data in RAM and performing parallel processing, while ML processors are utilized for edge learning tasks. Floating-point Operations Per Second (FLOPS) is the usual measure of computing performance: it is the number of floating-point calculations a computing resource can perform per second, and the higher the FLOPS, the better the computing performance.
FIGURE. 6. Edge-fog cloud infrastructure
The authors of [123] have provided a comparison of multiple ML processors such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), Microcontroller Units (MCU), and microcomputers, along with a detailed literature review of EC in the Smart Grid (SG). Merging deep learning and EC is predicted to bring new possibilities to both interdisciplinary research and industrial applications: deep learning can provide greater data processing capability and enable novel applications such as autonomous driving and video surveillance [124]. EC, alone and in combination with various technologies, is quite effective in industries as well. Merging the blockchain and EC paradigms can be effective in overcoming security and scalability issues: in [125], the authors implemented blockchain and EC in IIoT/IoT critical infrastructure to overcome these issues, and also provided a survey and discussed open research areas for security and scalability. The authors of [126] have given very informative insight into the industrial internet revolution, where industrial edge computing is implemented to facilitate fast connectivity, data optimization, and real-time control; this also has the benefits of empowering smart applications, ensuring better security, and protecting user privacy. Edge Computing Nodes (ECNs) are utilized by industrial edge computing, bridging the gap between the physical and digital worlds by serving as smart gateways for assets, systems, and services. The IEEE P2805 standards are also discussed, which aim to solve problems of self-management, data acquisition, and ML through cloud-edge collaboration on ECNs.
The relationship between EC and Industry 4.0 takes the form of on-site data centers. It can be summarized by the benefits EC provides in Industry 4.0: faster data processing, quicker decision-making, increased productivity at all levels of management, and a reliable big data infrastructure, to name a few. EC can be a pillar of Industry 4.0 in its own right, or it can complement cloud computing when the two function in tandem.
VI. DIGITAL TWIN
Digital Twin, which incorporates big data, AI, Machine Learning (ML), and IoT, is a key technology in Industry 4.0. The authors of [127] and [128] have reviewed different definitions of DT. At present, the two globally accepted definitions come from Grieves and NASA. NASA defines the DT of a space vehicle as: "A Digital Twin is an integrated multi-physics, multiscale, probabilistic simulation of an as-built vehicle or system that uses the best available physical models, sensor updates, fleet history, etc., to mirror the life of its corresponding flying twin" [129].
Multiple companies utilize the concept of DT. Chevron saves millions of dollars in maintenance costs by implementing DT for its oil refineries and fields [130]. Siemens utilizes the concept of DT to minimize failures, reduce time to market, and create new business directions [131][132][133]. Fig. 7 represents the relationship between physical and digital twins in manufacturing applications.
FIGURE. 7. DT model of manufacturing application
DT was initially looked to as the next generation of simulation tools [134], but Tao and Zhang worked towards a point of convergence of digital and physical systems [135]. DT provides a better human-machine connection: it is bi-directional communication between the digital model and the physical world. The simulation model utilizes real-time sensor data of selected parameters to replicate the performance and behavior of the system under consideration [136]. A digital representation of a physical system helps in predictive analysis, health monitoring, business models, avoiding downtime and delays, and improving product design at lower cost. In [137], the importance and challenges of DT in personal healthcare are discussed. Bagaria et al. [138] summarized the technologies and application requirements for implementing DT in personal healthcare. The author of [139] noted that DT provides a novel way to represent a physical system in a digital model with respect to its position, shape, status, gesture, and motion. By utilizing real-time sensor data along with AI, machine learning, and big data analytics, DT can be used for diagnostics, monitoring, prognostics, and optimization [140,141]. In this way, DT enables a wide range of decision-making operations. Once the DT model of the facilities, environment, and people is prepared, it can be used to train users, operators, maintenance workers, and service providers. DT is a fruitful method for improving the productivity and efficiency of industries and companies [142].
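The bi-directional loop described above, with sensor data flowing into the digital model and commands flowing back to the physical asset, can be sketched minimally in Python. The `DigitalTwin` class, its proportional correction rule, and all numeric values are illustrative assumptions, not taken from any cited system.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Minimal digital-twin sketch (hypothetical): mirrors a physical
    asset's state and feeds a corrective command back, i.e. the
    bi-directional communication described in the text."""
    target_temp: float = 70.0
    history: list = field(default_factory=list)

    def update(self, sensor_temp: float) -> float:
        """Ingest a real-time sensor reading, refresh the digital state,
        and return a command for the physical system (heater duty 0..1)."""
        self.history.append(sensor_temp)
        error = self.target_temp - sensor_temp
        # A simple proportional correction stands in for a full simulation model.
        return max(0.0, min(1.0, 0.5 + 0.01 * error))

twin = DigitalTwin()
duty = twin.update(sensor_temp=65.0)  # physical -> digital -> command
```

In a real deployment the `update` step would be driven by a simulation model and a standardized transport (e.g., OPC-UA or MQTT) rather than a direct function call.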
The applications of DT, according to the product lifecycle, can be linked to the design, production, and use phases, as shown in Fig. 8. At the product design stage, DT enables designers to visualize, digitize, and materialize the elusive concepts of systems with multiple components and implicit couplings (ships, aircraft, and factories) [143,144]. The quality of designs can be compared, evaluated, and validated with DT rather than by building expensive physical prototypes [144].
FIGURE. 8. Configuration and application of digital twin
The digital representation of production and usage scenarios can help explore all the possibilities and variations in the manufacturability and functionality of the entities to create an optimal design. In this way, the design and production departments can work together to identify faults and quality defects and provide better solutions [145]. The authors of [143] and [146] demonstrated that DT could simulate the whole factory design process, ranging from layout and material handling to equipment configuration. Zhang et al. [147] worked on a simulation-based method for plant design and production planning; this approach can be implemented to create DT models of a plant. At the production stage, a DT can help optimize production management through the simulation, verification, and confirmation of process planning and production scheduling. DT can help with the optimal placement of workers, equipment, on-site resources, and work-in-process [148]. In terms of control and execution, DT keeps track of all the activities occurring in the physical world to forecast and enhance the control approach [149,150] and to align the process with planning [151]. A DT model of a construction site can help detect and predict potential issues before they occur in the real world, and can also help optimize planning, processing, and resource allocation [135,152]. Further, the authors of [153] proposed an architecture for utilizing cloud-based ubiquitous robotic systems for smart manufacturing of customized products, along with implementation procedures for their creation. Wang et al. utilized Holons, each consisting of a logical part and a physical part, to mimic the cyber and physical entities of CPS [154]. Finally, at the service stage, physical systems behave differently in various usage scenarios for different purposes, and DT is utilized to simulate these usage scenarios.
In these circumstances, DT can provide new methods for diagnosing and prognosing damage location [155], remaining life [156], and wear [157], reducing costs and downtime [158]. Iterative experimentation can be carried out with the help of the DT model to generate the best maintenance solution [136]; examples include the performance of aircraft engines in terms of pressure tolerance [140] and wear coefficient, and DT-driven Prognostics and Health Management (PHM) for wind turbines [159]. With the help of simulation and virtual reality tools, DT allows operators to understand complex physical systems and processes. The creation of a DT is a long-term process of orientation, operation, and optimization, for which multiple software tools can be used in synchronization. Some of the research issues in the simulation community are (1) the need for big data analytics along with better sensor technology for data collection, processing, and analytics, (2) real-time synchronization between the physical system and the digital model to reflect the current status, and (3) suitable methods for model generation, verification, validation, and uncertainty quantification.
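The remaining-life prognosis mentioned above can be illustrated with a toy sketch: fit a straight line to wear measurements and extrapolate to a failure threshold. The function name, the linear degradation model, and the numbers are assumptions for illustration; real PHM systems use far richer degradation and uncertainty models.

```python
def remaining_useful_life(times, wear, failure_threshold):
    """Fit a straight line to wear measurements (ordinary least squares)
    and extrapolate to the failure threshold. Hypothetical helper for
    illustration only."""
    n = len(times)
    mean_t = sum(times) / n
    mean_w = sum(wear) / n
    slope = (sum((t - mean_t) * (w - mean_w) for t, w in zip(times, wear))
             / sum((t - mean_t) ** 2 for t in times))
    intercept = mean_w - slope * mean_t
    t_fail = (failure_threshold - intercept) / slope  # time when wear hits threshold
    return t_fail - times[-1]                         # time left from last sample

# Wear grows ~0.1 per time unit; assume failure at wear = 1.0.
rul = remaining_useful_life([0, 1, 2, 3], [0.0, 0.1, 0.2, 0.3], 1.0)  # 7.0
```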
The author of [50] provided practitioners and researchers with a detailed overview of key technologies and tools for implementing DT. The extensive details in that paper are very beneficial for researchers looking to understand the different tools and platforms for modeling, connectivity, data management, diagnosis, optimization, cognition, and control of the physical world. Tools for DT service applications, modeling, and connectivity are represented in Table III to Table V. A single tool can be utilized for multiple tasks based on its capabilities, functionality, and performance.
In Table III, different tools for various DT service applications are shown, such as optimization service tools, platform service tools, simulation tools, and diagnostics and prognosis service tools. The diagnostics and prognosis service tools are very useful for predictive maintenance of the system and for reducing system downtime; this is achieved by analyzing the historic and real-time data of the twin. The ANSYS simulation platform allows customers to model their systems and analyze their performance, offering the opportunity for design changes and troubleshooting. MATLAB can be used to implement data-driven techniques (such as deep learning, neural networks, machine learning, and system identification) for predictive analysis, comparisons, and determining remaining useful life, informing operators when to replace or service equipment. Similar tools for diagnosis and prognosis are presented in Table III. Optimization service tools provide extensive what-if simulations to evaluate the performance of, and the need for adjustments to, the current system set-points. This allows operators to optimize the system, or control it during operations, to lessen risk, reduce energy consumption and cost, and increase system efficiency. Siemens provides Plant Simulation software to optimize factory layout and production-line scheduling [147]. Simulink is an add-on product to MATLAB that is more interactive and graphical than MATLAB's code-based approach. Similar optimization service tools are presented in Table III.
Simulation tools not only provide diagnostics and predictive analysis and determine the best maintenance approach, but also support next-generation system design based on historic and sensor data. Designing a CNC machine tool can be taken as an example. Without accurate Finite Element Analysis (FEA) of the design, the CNC machine tool will fail due to vibration. Extra material can be added to strengthen the machine and reduce vibrations; however, this increases cost due to overdesigning the tool. FEA in ANSYS software provides the best solutions, taking into account the performance requirements and cost limitations, and can fulfil the lean design requirements of the CNC machine tool [160]. Siemens NX software is a powerful and flexible tool that enables companies to understand and implement DT to its fullest. NX software provides futuristic design, implementation, and solutions, handling all aspects of the system from design engineering to manufacturing. Similar simulation tools are presented in Table III.
Service platform tools provide the ability to integrate technologies such as IoT, big data, and AI. The PTC ThingWorx platform allows the operator to connect the DT model with the system in operation to represent and analyze sensor data. It supports data acquisition, industrial protocol conversion, big data analysis, device management, and other services. PTC ThingWorx allowed HIROTEC, a premier supplier of automation manufacturing equipment and parts, to recognize the connection between CNC machine operation data and ERP data. Other service platform tools are presented in Table III.
Digital models replicate physical systems based on their physical geometries, behavior, properties, and rules. The tools for DT modeling include geometry modeling, physical modeling, behavior modeling, and rule modeling; these are presented in Table IV. Geometric modeling tools provide details of the shape, size, position, and assembly associations of systems and, based on these, support structural analysis and production planning. An example of such a tool is 3D Max. It allows animation, 3D modeling, visualization, and rendering; it is used to describe detailed environments and is widely used in games, multimedia production, and architectural design. More examples of such tools are presented in Table IV.
Rule modeling improves service performance by modeling the rules, logic, and laws of physical behavior. The HPE EL20 edge computing system, with ML capability from PTC's ThingWorx, can monitor the normal state of a pump while it is running. With the help of learned rules, DT can detect abnormal operations and patterns and predict future trends. Similar tools are presented in Table IV.
Behavior modeling tools are utilized to develop a model that responds to external drivers and disturbance factors, improving its simulation service performance. An example is the motion control system of a CNC machine tool designed on the soft PLC platform CoDeSys. The motion control system uses socket communication to exchange information with the multi-domain model of the 3-axis CNC machine tool developed in MWorks. In this manner, 1-axis motion control and 3-axis interpolation of the CNC machine tool can be realized, and the multi-domain model can respond to the external drive. More examples of such tools are presented in Table IV.
Physical modeling tools are used to build a physical model to analyze the physical states of physical entities. The physical model is developed by endowing geometric models with the physical characteristics of the physical entities. An example of such a tool is the FEA software by ANSYS, which utilizes sensor data to represent the real-time boundary conditions for the integrated wear-coefficient and geometric models, or performance degradation, in the digital model [161]. Simulink has also been used for physics-based modeling; it contains a range of models of electrical and mechanical components. Similar tools are mentioned in Table IV.
The concept of DT is to connect the physical and digital worlds and break the shackles between physical and virtual realities. A wide variety of tools is required for connectivity between the physical and virtual worlds, as well as to connect different parts within a DT model. The connection within the DT model covers the interaction, communication, and exchange of information between the system, service, data center, and digital model. PTC ThingWorx can act as a gateway between sensors and their respective digital model parts to connect multiple smart devices to the IoT network.
MindSphere is an example of a cloud-based tool from Siemens. It allows connection between products, plants, systems, and machines, and has advanced data analytics capabilities that allow the wealth of data to be put to use. Another example is the Jasper Control Center from Cisco Jasper, which can manage connected devices using NB-IoT technology; it continuously monitors network conditions, IoT service status, and device behavior to ensure high service reliability through real-time diagnostics and proactive monitoring of the connection. Azure IoT Hub by Microsoft allowed Rolls-Royce to create a DT of an engine and perform machine-learning-based data analysis to detect multiple engine anomalies and prescribe timely solutions [162]. The connection is necessary for the transfer of information that supports problem diagnostics and troubleshooting, thereby optimizing the performance of physical entities. It can also assist in developing optimized maintenance strategies based on every system's unique characteristics. Numerous tools are utilized in various ways in DT applications; e.g., PTC's ThingWorx can be utilized for platform services as well as diagnosis and prognosis services, but cannot be used for simulation and optimization. Tools such as PTC's ThingWorx, Foxconn's Beacon, ANSYS, Siemens' MindSphere, and Dassault's 3DExperience are presented in Table VI. A single tool, MATLAB/Simulink, is added to Table VI based on the information provided by the authors of [50].
VII. APPLICATION AREAS
This section discusses the applications of multiple Industry 4.0 technologies such as IoT, CPS, big data, AI/ML, and robotics, grouped into two domains: smart manufacturing and healthcare.
A. DT in SMART MANUFACTURING
CPS acquires data from the environment, processes it, and makes accurate decisions. Such systems are referred to as 'smart machines'. The physical and cyber layers combine to form CPS, which is characterized by availability, performance, and reliability. CPS has a remarkable impact not only on industrial systems but also on our day-to-day lives.
Smart factories, which are centered on Cyber-Physical Production Systems (CPPSs), rely on these smart machines. At present, smart machines are not far from being the final solution for modern factories, conditioned such that they can perform bidirectional communication, data management, storage, and analysis along with fault tolerance [163]. Numerous technologies are already present in factories to remove problems and provide a fully automated, self-sufficient production line. The concepts of CPS and DT provide a new direction for smart manufacturing and healthcare by creating a closed loop between the physical world and the digital model based on data acquisition, real-time data analysis, decision-making, and accurate execution. The DT model provides an effective and intuitive way to drive improvement in engineering, and its ability to provide solutions can be improved with real-time data integration. Fig. 9 depicts how digital models can be utilized to enhance the composition and functionality of CPS by providing capabilities for CC, predictive analysis, decision making, and big data analytics. From a technical point of view, three components must work together for the construction of a DT; a visual representation of a DT reference model is given in Fig. 10. The components are: (1) an information model to extract the physical characteristics of a system; (2) a communication mechanism to transfer data bidirectionally between the digital and physical systems; and (3) a data processing algorithm or module to extract information from multi-source, diverse data sets to create a real-time digital representation of the physical system.
FIGURE. 10. DT reference model
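The three components of the reference model can be sketched together in Python. The JSON payloads, the `INFO_MODEL` schema, and the averaging fusion below are illustrative stand-ins for a real information model, transport mechanism, and analytics pipeline.

```python
import json

# (1) Information model: a schema-like description of the physical asset
#     (names and signals are hypothetical).
INFO_MODEL = {"asset": "pump-01", "signals": ["flow", "pressure"]}

def decode_message(raw: bytes) -> dict:
    """(2) Communication mechanism: parse a JSON payload received from
    the physical system (an MQTT/OPC-UA transport is assumed elsewhere)."""
    msg = json.loads(raw)
    # Keep only signals declared in the information model.
    return {k: msg[k] for k in INFO_MODEL["signals"] if k in msg}

def fuse(samples: list) -> dict:
    """(3) Data-processing module: fuse multi-source samples into one
    real-time state estimate (here, a per-signal average)."""
    state = {}
    for sig in INFO_MODEL["signals"]:
        vals = [s[sig] for s in samples if sig in s]
        if vals:
            state[sig] = sum(vals) / len(vals)
    return state

samples = [decode_message(b'{"flow": 2.0, "pressure": 5.0}'),
           decode_message(b'{"flow": 4.0}')]
state = fuse(samples)  # one consolidated digital state
```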
Information models are necessary to extract meaning from the large amount of data a system receives. A data synchronization mechanism between the digital model and the physical system is also necessary; otherwise, the connection between them cannot be established and the digital model becomes a one-off snapshot of its physical counterpart. Standardization is key to reducing the heterogeneity of the data streams being sent to the DT information model. Fig. 11 lists multiple standards that provide information models for describing physical objects in the manufacturing domain. The International Standard Organization (ISO) is playing an active role in developing a dedicated standard for DT manufacturing [164]. According to [87], information models for product DT and information models for production DT are the two subtypes of information models. For product DT, ISO 14649 [165] and ISO 10303 [166] are two outstanding standards. ISO 10303 provides a neutral data structure for exchanging product data between CAD systems. AP242 [167] was created by combining AP203 and AP204 for Managed Model-Based 3D Engineering. With these information models, PMI information and geometric tolerances can be inserted into the system directly from product design files in the STEP AP242 model without the requirement of interpreting 3D drawings. These changes provide the communication necessary at various stages of the product lifecycle, along with autonomous process planning, manufacturing, inspection, and so forth. In the future, ISO 14649 [168] and ISO 10303-238 [169] (also known as STEP-NC) are planned to replace the ISO 6983 (RS274D) M and G codes with an up-to-date associated language that can connect directly to the CAD design data. For production DT, ISO 13399 [170] is utilized for the computer-interpretable representation and exchange of industrial product data regarding tool holders and cutting tools.
It provides an explanation of product data regarding cutting tools. The model has been used for CAD/CAM/CNC integration, product data management, tool management, and manufacturing resource planning. ISO 14649-201 [171] is a similar model utilized for specifying the machine tool data required for cutting processes. The MTConnect standard offers a semantic vocabulary for manufacturing hardware to provide contextualized, structured data with no proprietary format. OPC-UA provides communication within machines, from machines to systems, and between machines in industry. The combination of MTConnect and OPC-UA helps ensure consistency and interoperability between the MTConnect and OPC-UA specifications. A single information model cannot meet the heterogeneous requirements of the wide range of DT applications. Previous studies suggest that a systematic information model development process guarantees maximum standard usability and conformance [172]. A bottom-up approach is suggested by the OPC-UA and MTConnect communities to allow the information models to be implemented in various new applications. The authors of [173] introduced a tri-model-based approach (i.e., digital representation, computational model, and graph-based model) for the development of a product-level DT. The three models work alongside each other to simulate the characteristics and behavior of the physical system (an ANET A8 3D printer). The digital representation of the 3D printer was built on Neo4j, and a Raspberry Pi 3B was utilized for the data extraction and consolidation module. DT was also utilized for dynamic scheduling in a job shop where milling machines make hydraulic valves [106]. Dynamic scheduling is day-to-day decision-making; incorporating DT allows physical- and digital-world data to support more predictive analysis of machine availability and to detect any abnormality for timely rescheduling.
According to [174], the implementation of DT in the Computer Numerical Control Machine Tool (CNCMT) is theoretically very fruitful, but there are numerous difficulties in its implementation. A rolling guide rail was taken as the example to validate the effectiveness and operability of the consistency retention method the authors proposed for the CNCMT DT model. The rolling guide rail is only one part of the CNCMT, so a future direction is a DT model covering all components and parameters. In [175], the authors utilized a 5-axis laser drilling machine as a case study for the DT model. Linear actuators and direct-drive rotary motors improve the performance of multi-axis machine tools, but without mechanical gearing they increase the nonlinear dynamic coupling between axes, making it difficult for digital models to identify the system accurately. A new approach was proposed for estimating nonlinear multivariable dynamic models non-intrusively using in-process CNC information; features like actuator force/torque ripples, nonlinear friction, multi-rigid-body motion, and vibration were recorded. High Precision Products (HPPs), with multidisciplinary coupling, are utilized in marine, aerospace, and chemical applications. HPPs have compact and complex internal structures, and their assembly process depends on manual experience, which can lead to poor consistency and low efficiency. A DT-driven assembly approach for HPPs is proposed by the authors of [176], along with a comparison between traditional and DT-driven assembly. The authors of [177] created a DT model of a small-scale knuckle boom crane for condition monitoring. Nonlinear Finite Element (FE) analysis was performed with the payload weight as input, and characteristics such as strains, stresses, and load were determined in real time. Condition monitoring increases safety and reliability, and the authors state that this approach can be applied to various robotic manipulators used in industry.
Faults in the CNCMT may reduce precision and affect production, so reliability is of paramount importance, and predictive maintenance is an effective way to avoid such failures. A hybrid DT-driven approach (i.e., DT model-based and DT data-driven) to cutting tool life expectancy is studied by the authors of [178]. The results indicated that the hybrid approach is more accurate and feasible than either single approach.
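One simple way to realize such a hybrid estimate, shown purely as an illustration of the idea rather than the actual method of [178], is a confidence-weighted blend of the model-based and data-driven predictions:

```python
def hybrid_tool_life(model_estimate, data_estimate, data_confidence):
    """Blend a physics/model-based tool-life estimate with a data-driven
    one. `data_confidence` in [0, 1] weights the data-driven term.
    Hypothetical helper for illustration only."""
    w = max(0.0, min(1.0, data_confidence))
    return (1 - w) * model_estimate + w * data_estimate

# Trust the data-driven estimate 75% (all values are illustrative).
life = hybrid_tool_life(model_estimate=120.0, data_estimate=100.0,
                        data_confidence=0.75)  # 105.0 hours
```

In practice the weight would itself be learned or derived from how much run-to-failure data is available for the given tool and cutting conditions.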
The authors of [179] studied centrifugal pumps in a heating, ventilation, and air-conditioning (HVAC) system. DT models were created for continuous anomaly detection in the pumps, and the digital model helped in automated and efficient asset monitoring in Operation & Management (O&M).
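A minimal sketch of such continuous anomaly detection, assuming a simple rolling z-score rule rather than the actual models of [179], follows; the class name, window size, and threshold are all hypothetical:

```python
from collections import deque
from statistics import mean, stdev

class PumpAnomalyDetector:
    """Flag a reading as anomalous when it deviates from the rolling
    mean by more than `k` standard deviations (illustrative rule)."""
    def __init__(self, window=20, k=3.0):
        self.buf = deque(maxlen=window)
        self.k = k

    def check(self, value: float) -> bool:
        anomalous = False
        if len(self.buf) >= 5:  # need a few samples before judging
            mu, sigma = mean(self.buf), stdev(self.buf)
            anomalous = sigma > 0 and abs(value - mu) > self.k * sigma
        self.buf.append(value)
        return anomalous

det = PumpAnomalyDetector()
readings = [10.0, 10.2, 9.9, 10.1, 10.0, 10.1, 25.0]  # last reading is a spike
flags = [det.check(v) for v in readings]
```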
Augmented Reality (AR) is utilized to realize the DT model of an EMCO milling machine in [180]. AR gives the operator control and the ability to monitor the machine tool while providing access to DT data at the same time, allowing a consistent and intuitive human-machine interface that improves the efficiency of the manufacturing process. In [181], a DT model of a 3-axis CNC engraving machine controlled via Arduino is created with real-time data on the positions of the axes; a CAD model represents the digital model of the testbed. A data-driven DT model, combined with a hybrid model prediction method based on the deep learning technique Deep Stacked GRU (DSGRU), is created for the predictive maintenance of manufacturing machines; testing was performed on vibration data from a milling machine tool to show the performance of the DT model for tool wear prediction [182]. Predictive maintenance of an automotive brake system with the ThingWorx IoT platform allowed braking pressure to be measured at various speeds, and a CAD model implemented in the CREO simulation was used for the prediction of brake wear [183]. Qualification is an important process that every product must pass. 3D tool printing has important applications in the healthcare, automotive, and aerospace industries; utilizing DT, with machine learning and big data, can reduce the number of trials and errors needed to create the desired product [184]. The utilization of cloud-based platforms to create DT is performed in [185], where the authors placed a single-edge micro-cutting machine tool in a collective cloud-based PLM platform (3DExperience from Dassault Systèmes). The DT model helped in estimating and simulating the behavior of the system under various cutting conditions. An MQTT broker is utilized for connectivity through a broker-client architecture between the physical system (a bending beam test bench) and its DT model [186]. FEA simulations are conducted to analyze the performance of the bending beam.
The results are represented numerically and graphically in CAD. DT also has applications in the helicopter industry. In [187], the authors worked to create a DT of helicopter dynamic systems (i.e., the swashplate rotor assembly). Manufacturers are interested in developing DT models to be able to predict the lifetime of mechanical parts. Data recorded during flights is utilized to simulate the loads the mechanical parts undergo, and the simulation models will help in developing a new bearing model and validating it through bench tests. The authors of [188] simulated the cutting process of a CNC machine through the DT model. The simulation can help reduce costs, decrease material waste, reduce tool collisions, increase system life, and ensure the accuracy and precision of the cutting process. There are numerous situations where an operator works in collaboration with a robot or is present in a robot's workspace. In [189], the authors worked towards creating a DT model to support the design, build, and control of human-machine cooperation, with a case study of an industrial assembly considered for human-robot collaboration. Any digital environment is prone to cyber-attack, and this is an open research direction. The authors of [190] analyzed cyber-attack modes in a collaborative robotic CPS, providing details on the severity and categorization of cyber-attacks and on the safety of the human worker during human-robot collaboration. A two-pronged security strategy was devised and tested on a teleoperation benchmark (NeCS-Car).
Controlling a group of robots working together without any conflicts is necessary for the smooth operation of factories, but it is problematic. A DT model of a multi-robot monitoring system is simulated to avoid collisions and detect robot movements in the real environment [191]; the case study is a six-degree-of-freedom robot arm manipulator with OPC-UA providing connectivity. The designed system can simulate a real-world scenario and help monitor industrial robots to enhance the production efficiency of factories. The complexity of any system, product, or manufacturing process increases the chances of human-generated error. An overhead assembly operation from a vehicle assembly plant is considered by [192], with the DT of the human operator created in the Siemens Tecnomatix suite. The DT helps analyze human anthropomorphic models to discover the boundaries for performing the assembly tasks based on weight, height, and gender. A DT of a mobile robot designed to assist the human operator in the assembly process helps evaluate process time, human-robot collaboration, and joint ergonomic impact, revealing the limitations of DT in human-robot collaboration.
B. HEALTHCARE
Rapid population growth has placed a massive strain on existing healthcare resources. New technologies are necessary to provide fast, accurate, and economical solutions for medical emergencies, diagnoses, and procedures. Smart healthcare educates people about their health conditions and enables them to manage some of those conditions themselves. IoT plays its role in healthcare through the Healthcare Internet of Things (H-IoT), a complex system spanning medicine, microelectronics, health systems, AI, and more [193]. This allows for remote monitoring of patients in hospitals and homes, with a focus on enhancing healthcare quality, preventing and managing emergencies, and reducing healthcare costs [194,195]. The vast implementation opportunities of DT in healthcare, and studies that will guide future research, are emphasized in [196].
IoT also has a strong foothold in DT technology, and DT in healthcare has numerous applications and open research problems. It can ideally replicate the human body, employing large data sets and AI-powered models to replicate human physiology and provide possible answers to a range of clinical questions [197]. DT models can also be utilized to predict the outcome of various clinical procedures. Digital models will help young practitioners, doctors, and surgeons work in a safe environment, conduct training procedures, and perform testing on a digital human body, but many technical, privacy, and ethical issues need to be resolved before this can practically happen. The implementation of ML and data mining algorithms will provide accurate outcomes for various medical procedures given real-time data and processing capability [198]. Another example of DT is optimizing the hospital lifecycle, where edge, fog, and cloud computation are used in the creation of a network. Cloud-based IoT [199] can overcome problems caused by processing and storage limitations, but a large amount of data is transmitted in this paradigm, which can cause latency and requires a high-bandwidth internet connection, among other constraints; applications that must operate in real time cannot tolerate this. Edge and fog computation are solutions to the latency problem. IoT networks developed in this fashion have three tiers: device, edge, and cloud. This brings several benefits but also gives rise to various problems in design and development [200][201][202][203]. A cloud-based DT system for geriatric healthcare was proposed by [204]. The authors introduced a reference framework, Cloud-DTH, which combines cloud architecture and DT healthcare (DTH), aiming to provide computational and management capabilities in healthcare systems.
The authors worked on two case studies, but the evaluation lacked performance figures and results, and it is not clear whether AI or ML algorithms were used in the prediction process. A successful DT healthcare system relies on efficient and accurate machine learning algorithms to manage multiple processes. Healthcare requirements can be divided into functional and non-functional. Functional needs are completely distinctive and work according to predefined responsibilities. Open areas remain in the non-functional needs, i.e., the attributes that define system quality in the healthcare system: low-power connectivity, quality of service, system reliability, interoperability, higher efficiency, and real-time operation [205]. The authors of [206] provide an extensive literature review of IoT and associated technologies in healthcare. The correct cyber-resilience technology and policy are important to maintain and preserve a healthcare digital twin. The authors of [207] pointed to vulnerability detection as an essential technology for cyber resilience in healthcare DT. Deep Learning (DL) is implemented to overcome the limitations of machine learning in vulnerability detection; they implemented a novel deep neural model to capture bidirectional context relationships among risky code keywords, which showed improved results compared with the latest DL-based methods for vulnerability detection. Another example is the implementation of an Artificial Neural Network (ANN) on patient data for decision making and health monitoring, as discussed by [208]. GE Healthcare has been using DT for hospital management optimization, focusing on predictive analytics platforms and AI capabilities to transform huge volumes of patient data into actionable intelligence. GE Healthcare designed the "Capacity Command Center" implemented at Johns Hopkins Hospital in Baltimore for simulations and better decision-making capabilities.
Mater Private Hospital (MPH) in Dublin has been optimized with DT technology with the help of Siemens Healthineers. One of the tasks performed by Siemens Healthineers and MPH was to implement DT in the radiology department, using an AI computer model of the department and its operations. With DT, MPH was able to overcome the challenges of increasing clinical complexity, aging infrastructure, delays, rising patient demand, and a large bulk of data.
In [209], the authors supported the implementation of DT technologies in medicine, e.g., in medical cyber-physical systems [210,211]. Kocabas et al. [212] worked on medical cyber-physical systems having multiple layers of data acquisition, data analysis, cloud systems, and actuators. Combining Wireless Body Area Networks (WBAN) with IoT networks and cloud computation has been considered an open research area in healthcare applications [213]. Wearable devices and AI were implemented for human data acquisition and analysis to simulate human processes such as understanding user behavioral motivation, emotion recognition, and recognition of user intent [214][215][216]. Furthermore, this has helped to create interactive games that help artists utilize their creativity. Lastly, it can be used to carry out health monitoring and provide instructions to help users improve their health. Psychologists have started to use actigraph-measured physical activity levels to predict the onset of various episodes of bipolar disorder [217]. The authors of [218] have put forward the idea of combining computational simulations with tissue engineering for more reliable, predictable, and accurate clinical outcomes. A framework for DT in remote surgery is provided in [219]. The authors of [220] presented a context-aware healthcare system using the DT framework: an ECG rhythm classifier was built using ML to detect heart problems and diagnose heart disease. A Cardio Twin architecture is used for ischemic heart disease detection at the edge [221]. Similarly, a High Definition (HD) camera can be used to take snapshots for monitoring a process. A comparison of these data capturing methodologies is necessary to validate the optimal approach for various DT applications. Furthermore, big data analysis is necessary before utilizing the data for monitoring, diagnosis, and prediction.
Data analysis and management are open research issues. Instead of relying on cloud computation alone, integration of edge, fog, and cloud infrastructure is necessary to distribute the responsibility of data processing. Challenges related to enormous data acquisition and analysis, and limited awareness of methodology and modeling, remain unresolved [227]. In healthcare, edge, fog, and cloud computation, AI-ML algorithms, and big data analytics are important for processing data for monitoring, diagnosis, selection of the best surgical method, comparison with hundreds of previous patients, and predictive analysis. In this field, problems like scalability, energy, co-design approaches, data privacy, data storage, and services available to heterogeneous sources are open to research [228]. There are two architectures available for the creation of DT models, i.e., server-based and edge-based. In the server-based architecture, a centralized server receives the complete data, performs the data analysis, and creates the DT model. This method is more economical and easier to maintain. In the edge-based architecture, data is still routed to the centralized server, but some analysis is carried out at the 'edge' of the system, so that pre-processed rather than raw data is transmitted. If designed correctly this has its benefits, but it is more complex to maintain. Existing DT applications use the concept for monitoring and prediction, but future research could provide DT models that support decision-making by human operators. The ultimate purpose of the industrial revolution is to provide autonomy to systems. For example, human presence is essential in smart manufacturing, but autonomous feedback control with minimum latency between the DT model and the physical system can provide that decision-making support. DT has the potential to strengthen the integrity of the physical system by providing improved observation, testing, and verification processes.
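The trade-off between the two architectures can be made concrete with a toy data-flow sketch (all readings, window sizes, and function names here are invented for illustration): the server-based variant ships every raw reading to the centralized server, while the edge-based variant transmits only windowed summaries.

```python
from statistics import mean

def server_based(raw_readings):
    # Server-based DT: the full raw stream crosses the network and the
    # centralized server performs all analysis itself.
    payload = list(raw_readings)
    model_state = {"avg": mean(payload), "samples": len(payload)}
    return payload, model_state

def edge_based(raw_readings, window=5):
    # Edge-based DT: the edge node pre-processes the stream (here, simple
    # windowed averages) so only compact summaries cross the network.
    payload = [mean(raw_readings[i:i + window])
               for i in range(0, len(raw_readings), window)]
    model_state = {"avg": mean(payload), "samples": len(payload)}
    return payload, model_state

temps = [36.5, 36.6, 36.7, 36.9, 37.0, 37.2, 37.1, 37.3, 37.4, 37.5]
raw_payload, _ = server_based(temps)
edge_payload, _ = edge_based(temps)
print(len(raw_payload), len(edge_payload))  # 10 2
```

In this toy case the edge variant transmits five times fewer values while the server-side model state stays the same; real deployments trade this bandwidth saving against the added complexity of maintaining analysis code at the edge.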
However, a corrupted DT can be used to mislead operators. Cyber-attacks can create inaccuracies in a DT, and any analysis or prediction performed by an affected DT is likely to be unreliable. Data modification and damage by cyber-attacks, whether in transmission or storage, must also be avoided. DT can create new points of failure that cyber-attacks can exploit to take control of the system, damage it, mislead operators, or eavesdrop on data communicated between the DT and the physical system. In healthcare, data of around 7.7 million patients from LabCorp Clinical Laboratory was compromised by a cyber-attack in July 2019 [229]. In May 2019, data of 11.9 million patients from Quest Diagnostics was affected by a cyber-attack [230]. Manufacturers across industries have concerns regarding the high cost and data security of DT applications [231]. Research needs to be carried out to ensure data protection.
IX. CONCLUSION
This paper presents an overview of the integration of numerous enabling technologies for the creation of DT along with core concepts, standards, reference models, and research work on DT in smart manufacturing and healthcare.
Research on DT has been conducted throughout the world, but gaps remain in the implementation of flexible and real-time synchronized DT models, in overcoming IoT limitations, and in controlling the physical system through the digital model. Communication technologies like 5G, 6G, or IEEE 802.11ah allow various DT applications to be tested. However, selecting and implementing an appropriate technology that fulfills the IoT requirements of the application and successfully provides bi-directional data/information transfer for the creation of DT models remains a challenge. Cost limitations, complexity of implementation, and integration between and within DT models are other challenges researchers face. Common data collection and processing methods do not fulfill the needs of DT. Sole reliance on CC will not fulfill the requirement of processing a large amount of data quickly and providing useful data for DT models. Edge-fog-cloud computation and AI-ML can provide the necessary support for pre-processing, diagnosis, and prognosis on data, while reducing the load on communication channels for data transfer and lessening the burden on CC. These considerations apply not only to industry but also to the healthcare, robotics, smart city, oil & gas, and education sectors.

Tughrul Arslan holds the Chair of Integrated Electronic Systems with the School of Engineering, University of Edinburgh, Edinburgh, U.K. He is a member of the Integrated Micro and Nano Systems (IMNS) Institute and leads the Embedded Mobile and Wireless Sensor Systems (Ewireless) Group with the University (ewireless.eng.ed.ac.uk). His research interests include developing low-power radio frequency sensors for wearable and portable biomedical applications. He is the author of more than 500 refereed papers and inventor of more than 20 patents. Prof.
Arslan is currently an Associate Editor for the IEEE Transactions on VLSI Systems and was previously an Associate Editor for the IEEE Transactions on Circuits and Systems I (2005-2006) and IEEE Transactions on Circuits and Systems II (2008-2009). He is also a member of the IEEE CAS executive committee on VLSI Systems and Applications (1999 to date), and a member of the steering and technical committees of several international conferences. He is a co-founder of the NASA/ESA Conference on Adaptive Hardware and Systems (AHS) and currently serves as a member of its steering committee.
Tharmalingam Ratnarajah (Senior Member, IEEE) is currently with the Institute for Digital Communications, The University of Edinburgh, Edinburgh, UK, as a Professor in Digital Communications and Signal Processing. He was the Head of the Institute for Digital Communications during 2016-2018. His research interests include signal processing and information-theoretic aspects of 6G wireless networks, full-duplex radio, mmWave communications, random matrices theory, interference alignment, statistical and array signal processing, and quantum information theory. He has published over 400 publications in these areas and holds four U.S. patents. He has supervised 16 Ph.D. students and 21 post-doctoral research fellows and raised 11+ million USD of research funding. He was the coordinator of the EU projects ADEL (3.7M€) in the area of licensed shared access for 5G wireless networks, HARP (4.6M€) in the area of highly distributed MIMO, as well as EU Future and Emerging Technologies projects HIATUS (3.6M€) in the area of interference alignment and CROWN (3.4M€) in the area of cognitive radio networks. Dr. Ratnarajah was an associate editor of IEEE Transactions on Signal Processing, 2015-2017, and Technical Co-Chair of the 17th IEEE International Workshop on Signal Processing Advances in Wireless Communications, Edinburgh, UK, 3-6 July 2016. Dr. Ratnarajah is a Fellow of the Higher Education Academy (FHEA).

The effect of genetic variation on promoter usage and enhancer activity
The identification of genetic variants affecting gene expression, namely expression quantitative trait loci (eQTLs), has contributed to the understanding of mechanisms underlying human traits and diseases. The majority of these variants map in non-coding regulatory regions of the genome and their identification remains challenging. Here, we use natural genetic variation and CAGE transcriptomes from 154 EBV-transformed lymphoblastoid cell lines, derived from unrelated individuals, to map 5376 and 110 regulatory variants associated with promoter usage (puQTLs) and enhancer activity (eaQTLs), respectively. We characterize five categories of genes associated with puQTLs, distinguishing single from multi-promoter genes. Among multi-promoter genes, we find puQTL effects either specific to a single promoter or to multiple promoters with variable effect orientations. Regulatory variants associated with opposite effects on different mRNA isoforms suggest compensatory mechanisms occurring between alternative promoters. Our analyses identify differential promoter usage and modulation of enhancer activity as molecular mechanisms underlying eQTLs related to regulatory elements.
For more than a decade, numerous genome-wide association studies (GWAS) have identified thousands of single nucleotide variants (SNVs) associated with human traits and diseases. The contribution of SNVs located within promoter and enhancer elements to disease etiology is well established 1,2 . However, understanding the consequences of these regulatory variants on the human transcriptome remains a major challenge for accurate interpretation of GWAS signals and for the precise identification of causal variants. This issue has been addressed in population studies combining individual genotypes and transcriptome profiles; a design capable of finding associations between SNVs and mRNA levels, namely expression quantitative trait loci (eQTLs) 3,4 .
Several observations support the functional implication of eQTLs on gene promoters. First, eQTLs have been recurrently found enriched within promoter regions of their associated genes [4][5][6] . In addition, regulatory variants have been found associated with alternative transcript usage 5,7,8 , including variation in mRNA 5′-end position. Also, given that human genes have, on average, more than four TSSs (transcriptional start sites) 9 , differential TSS or promoter usage deserves to be further investigated to better understand eQTL effects.
Moreover, eQTLs are also enriched in enhancer elements 5,6,10,11 . Briefly, enhancers are cis-regulatory regions located remotely from promoters and contribute to the regulation of gene expression by increasing transcription levels and providing information not encoded in proximal promoters, such as the developmental timing or tissue specificity of expression. Enhancers contain binding sites for transcription factors and chromatin looping mediator proteins necessary for them to act on target genes in a distance-independent manner 12 . Yet, the systematic identification of enhancers' target genes requires precise description of enhancer-promoter interactions, based on chromatin conformation assays or functional experiments using genome editing. As a consequence, the target genes of most enhancers remain poorly annotated, increasing the difficulty of interpreting regulatory variants located within enhancers, and in understanding their contribution to human disease.
We hypothesize that mapping genetic variants associated with promoter and enhancer functions can provide novel insights into the mechanism through which eQTLs exert their effects on gene expression. To this end, we quantify genome-wide promoter usage and enhancer activity using CAGE (Cap Analysis of Gene Expression) 13 transcriptome profiling and test the resulting molecular phenotypes for association with nearby genetic variants to discover cis-QTLs (Fig. 1a). We report the discovery of 5376 and 110 QTLs that are associated with promoter usage and enhancer activity, respectively. These analyses suggest a strong implication of genetic variants in the molecular regulation of promoter usage. Finally, this study provides an original approach, using CAGE technology, to decipher possible mechanisms of how genetic variation exerts its effect on gene expression through the modulation of enhancer activity.

Results
According to FANTOM atlas annotations (Fig. 1b), the CAGE peaks are associated with 13,351 genes and 7424 intergenic TSSs, indicative of potential novel transcripts. Individual genotype information was retrieved from previous studies 14,15 and, following imputation and filtering, a total of 7,508,202 variants were kept for downstream analyses. Any sample mislabeling between sequencing and genotyping data was detected and fixed with an efficient approach 16 that we developed (Supplementary Fig. 1c).
Using normalized CAGE peak expression values and genotypes (Supplementary Fig. 2), we mapped promoter usage QTLs (puQTLs) in cis using the QTLtools software 17 . Topologically associating domains 18,19 (TADs, Supplementary Fig. 3a) were used to define the tested cis-windows, assuming that proximal and distal regulatory elements acting on a same gene reside within the same TAD. Following this procedure, we mapped 5376 puQTLs at the significance threshold of 5% FDR (Supplementary Fig. 3b). These puQTLs consist of 4876 unique regulatory variants associated with 5376 CAGE peaks (read: promoters) assigned to 2697 protein-coding and 489 non-coding genes, as well as 849 putative novel transcripts. We then combined the human transcript catalog (GENCODE-V19) 20 and histone mark profiling 21 to annotate the puQTL CAGE peaks not associated with genes. The majority of these carry histone marks characteristic of promoter regions (n = 515) and can be classified as either antisense promoters (n = 271) or putative promoters localized within a gene or in an intergenic region (n = 244) (Fig. 1b). Interestingly, 227 CAGE peaks carry histone marks characteristic of enhancer regions and thus can be considered as enhancer RNAs (eRNAs) 22 .
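The per-variant association test behind such cis-QTL mapping can be illustrated with a minimal simulated sketch (this is not the QTLtools implementation; apart from the study's sample size of 154, every number is invented): each variant in a TAD-defined window is regressed against the normalized phenotype, and the strongest association is retained.

```python
import numpy as np

# Toy cis-window scan: 20 variants inside one TAD tested against a single
# normalized CAGE-peak phenotype for 154 individuals; variant 7 is
# simulated as the causal puQTL with slope 0.8.
rng = np.random.default_rng(0)
n_ind, n_var = 154, 20
genotypes = rng.integers(0, 3, size=(n_var, n_ind)).astype(float)  # dosages 0/1/2
expression = 0.8 * genotypes[7] + rng.normal(0.0, 1.0, n_ind)

def association(g, y):
    # Slope (beta) and Pearson r of the simple linear model y ~ g.
    g_c, y_c = g - g.mean(), y - y.mean()
    beta = (g_c @ y_c) / (g_c @ g_c)
    r = (g_c @ y_c) / np.sqrt((g_c @ g_c) * (y_c @ y_c))
    return beta, r

results = [association(g, expression) for g in genotypes]
best = int(np.argmax([abs(r) for _, r in results]))
print(best, round(results[best][0], 2))  # the simulated causal variant is recovered
```

The real pipeline additionally uses permutations to obtain empirical p values and controls the FDR across all tested phenotypes; this sketch only shows the core regression.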
The general features of puQTLs are similar to eQTLs. They localize close to TSSs (Fig. 1c), within open chromatin regions (DNase I hypersensitive sites, Supplementary Fig. 3c) carrying histone modifications specific to active transcription (H3K27ac, H3K4me1, and H3K4me3), and are depleted in regions of repressive H3K27me3 marks (Fig. 1d). We calculated the enrichment of transcription factor-binding sites overlapping puQTLs, using ChIP-seq data (ENCODE data 23 , Supplementary Fig. 3d). Among the enriched transcription factors, we detected CEBPB (p value = 8.99 × 10 −5 ), involved in immune and inflammatory responses; IKZF1 (p value = 9.99 × 10 −6 ), implicated in the regulation of lymphocyte differentiation; and BCL11A (p value = 1.09 × 10 −4 ), downregulated during hematopoietic cell differentiation and thus possibly associated with B-cell malignancies. We also found enrichment for the transcriptional co-activator EP300 (p value = 9.99 × 10 −6 ), similarly to the GTEx study 6 observation on splicing-QTLs. We then estimated the significance of the overlap between puQTLs and disease-associated variants reported in the GWAS catalog 24 . We found 1024 puQTLs (out of 4876 unique variants) overlapping linkage disequilibrium (LD) intervals containing GWAS hits (OR: 1.25, CI: 1.16-1.35, p value = 0.0009). Such significant enrichment has been previously reported for QTLs associated with other molecular phenotypes (eQTLs 5,6 , splicing QTLs 7 , and methylation QTLs 25 ). To refine the analysis and decipher whether the effects of the puQTLs and GWAS variants are concordant (i.e., tagging the same functional variant), we applied the regulatory trait concordance (RTC) 26 method. Briefly, RTC accounts for local LD structure and regresses out the genetic effect of the GWAS variant from the CAGE peak expression data, to measure whether the puQTL association is still significant.
The RTC scores range between 0 and 1, reflecting discordant or concordant effects of the pair of puQTL and GWAS hit, respectively. We found 51 puQTLs passing the high-confidence concordance threshold (RTC > 0.9, Supplementary Table 2) for 35 GWAS hits associated with a variety of phenotypic traits, including systemic lupus erythematosus and inflammatory bowel disease. This finding demonstrates the power of our approach to detect functional variants implicated in disease etiology.
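The regress-out step at the heart of RTC can be sketched with simulated data (a toy illustration, not the published RTC implementation; the LD structure and effect sizes are invented): when the puQTL and the GWAS hit tag the same functional variant, removing the GWAS genotype's linear effect from the phenotype abolishes the puQTL association.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 154
causal = rng.integers(0, 3, n).astype(float)               # shared functional variant
gwas_snp = causal.copy()                                   # GWAS hit in perfect LD
mask = rng.random(n) < 0.9
puqtl_snp = np.where(mask, causal, rng.integers(0, 3, n))  # puQTL in high LD
expression = causal + rng.normal(0.0, 0.5, n)              # CAGE-peak phenotype

def residualize(y, g):
    # Remove the linear effect of genotype g from phenotype y.
    g_c = g - g.mean()
    beta = (g_c @ (y - y.mean())) / (g_c @ g_c)
    return y - beta * g_c

r_before = np.corrcoef(puqtl_snp, expression)[0, 1]
r_after = np.corrcoef(puqtl_snp, residualize(expression, gwas_snp))[0, 1]
print(round(r_before, 2), round(abs(r_after), 2))  # strong before, ~0 after
```

The full RTC statistic goes further, re-running this residualization for every variant in the LD interval and ranking the results between 0 and 1; the sketch shows only the single concordant case.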
The FANTOM promoter atlas CAGE peaks are ranked, for each gene, according to the total read counts observed in their data sets 9 . Considering this classification, we find that 52% of the puQTLs are associated with secondary CAGE peaks, potentially involving alternative promoters (Supplementary Fig. 4a). In addition, 2289 puQTL-associated genes display more than one CAGE peak and thus likely have several promoters (Supplementary Fig. 4b). Together, these observations suggest that puQTLs are potentially implicated in the regulation of differential promoter usage.
Regulatory variants associated with promoter usage. We classified the genes associated with puQTLs into five groups (Fig. 2a, b). Group-1 includes single promoter genes (991 genes) and genes with several CAGE peaks distant by less than 200 nt, which we consider as 5′ RNA variations under the regulation of a single promoter element. We then considered puQTL effect sizes (β, regression slope) and direction to sort multi-promoter genes. First, 1550 multi-promoter genes with puQTLs significantly affecting a single CAGE peak constitute group-2 (Fig. 2c). Group-3 includes 217 multi-promoter genes with puQTLs having opposite effects (β of different signs) on distant CAGE peaks (Fig. 2d). Group-4 includes 375 multi-promoter genes with puQTLs having concordant effects (β of same sign) on different CAGE peaks with similar effect sizes (Fig. 2e). Lastly, group-5 includes 127 multi-promoter genes with puQTLs having concordant effects (β of same sign) on different CAGE peaks with different effect sizes (Fig. 2f). We hypothesized that not all puQTLs have an effect on the total mRNA production from a gene and that there exist either compensatory or antagonistic mechanisms among different promoters. We addressed this question by estimating the fraction of puQTLs that are also associated with total mRNA levels, measured from RNA-seq for 154 individuals 4,5 (Supplementary Table 3). We measured the enrichment of low p values following the proposal of the Geuvadis consortium 5 . Briefly, we applied the π 1 statistics 27 to estimate the proportion of truly alternative features for the 2894 puQTLs associated with genes for which mRNA levels were quantified. We find that 83% of the puQTLs are also significant eQTLs (Supplementary Fig. 5). This proportion decreases to 68% for the puQTLs that have opposite effects on multi-promoter genes (group-3), while it ranges between 81 and 88% for the other groups (Fig. 2g).
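The five-group assignment described above can be sketched as a small decision rule (the 200 nt distance comes from the text; the fold-change cutoff separating 'similar' from 'different' effect sizes is an assumption, since the text does not state a numeric one):

```python
def classify_gene(peaks, dist_nt=200, fold=2.0):
    """Assign a puQTL-associated gene to one of the five groups.

    `peaks` lists (tss_position, beta, significant) for the gene's CAGE
    peaks; `fold` is an assumed cutoff separating 'similar' from
    'different' effect sizes.
    """
    positions = [p for p, _, _ in peaks]
    if max(positions) - min(positions) <= dist_nt:
        return 1                                   # single promoter / 5' RNA variation
    sig = [b for _, b, s in peaks if s]
    if len(sig) == 1:
        return 2                                   # only one promoter affected
    if min(sig) < 0 < max(sig):
        return 3                                   # opposite effects on promoters
    mags = sorted(abs(b) for b in sig)
    return 4 if mags[-1] <= fold * mags[0] else 5  # similar vs different sizes

# TTC23-like case from the text: opposite effects on two promoters 1.6 kb apart.
print(classify_gene([(0, 0.56, True), (1600, -1.03, True)]))  # 3
```

Note that the published classification was done on measured regression slopes with formal significance calls; this rule only mirrors its logic.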
We concluded that about a third of group-3 genes, whose total mRNA levels do not significantly vary within the population, are associated with puQTLs triggering promoter shifts and therefore generating different isoform prevalence. An illustrative example is the TTC23 gene (ENSG00000103852.8) from group-3. TTC23 is represented in our data set with three CAGE peaks, localized in two promoter regions 1.6 kb apart (Fig. 3a). The SNV rs8028374 was mapped as a puQTL with significant opposite effects on the two promoter regions (β = 0.56 for p1 and β = −1.03 for p2) (Fig. 3b). Notably, we did not find a significant eQTL for the TTC23 mRNA level in data from Stranger and colleagues 4 (Fig. 3c) or the GTEx consortium 6 . Additional examples include the CD97 (ENSG00000123146.15) and FAM76B (ENSG00000077458.8) genes (Supplementary Fig. 6a, b). The partial shifts observed between promoters reveal a plasticity in promoter usage that is potentially implicated in preserving suitable steady-state mRNA levels.

Fig. 4 eaQTL mapping and integration with puQTLs and eQTLs. a Significance relative to distance for each enhancer's best-associated eaQTL plotted for a 1 Mb window. b Total counts of histone mark ChIP-seq signals (GM12878 cells, ENCODE data) for the 500 kb regions flanking eaQTLs. c Schematic representation of the procedure to identify triplets of regulatory variants, enhancers, and paired promoters. d CAGE signal at the ARL4C locus and associated enhancer region. The variant (rs1464264) mapped as a puQTL for ARL4C-associated CAGE peaks (p1, p2) and eaQTL for paired enhancers (e1, e2) is shown. Normalized promoter (e), enhancer (f), and mRNA (g) expression relative to each genotype group are plotted for the entire population.

One example of a multi-promoter gene for which promoter usage analysis provides a hypothetical mechanism underlying the effect of an associated eQTL is DENND2D (ENSG00000162777.12).
The seven DENND2D CAGE peaks are distributed in three distinct promoters within a 4.3 kb region (Fig. 3d). We detected only one promoter region, with two CAGE peaks, significantly associated with a puQTL (rs35430374, p value = 2.54 × 10 −11 for p4 and p value = 1.50 × 10 −12 for p7, Fig. 3e). This signal was replicated using the expression values for exons specific of transcript isoforms produced from either of the three promoter regions (Supplementary Fig. 6c). Remarkably, the same variant is detected as an eQTL (p value = 1.73 × 10 −6 ) for total mRNA levels (Fig. 3f), which suggests that activation of an alternative promoter (here p4) results in the observed eQTL effect. Indeed, the CAGE peak p4 appears to be driving the largest fraction of the observed variance for the mRNA level. Similar cases were observed among puQTL associated with genes of group-3, such as MCM8 (ENSG00000125885.9) and TLR1 (ENSG00000174125.3) (Supplementary Fig. 6d, e).
Taken together, the integration of puQTLs and eQTLs provides new insights into the mechanisms underlying eQTLs, explaining the prevalence of transcript isoforms and the relative participation of alternative promoters to the transcriptional output of genes.
Regulatory variants associated with enhancer activity. Among the previously non-annotated CAGE peaks associated with puQTLs, the 213 located in enhancer regions (Fig. 1b), as defined by characteristic chromatin modification patterns 21 , are TSSs of eRNAs 22 . Bidirectional-capped RNA production has been described as a hallmark of active enhancers and used to detect and measure enhancer activity in numerous human and mouse cell types 28,29 .
We sought to use the CAGE transcriptome profiling to gain further insights into the implication of regulatory variants on enhancer regions, mapping variants associated with eRNA levels, considered here as a proxy of enhancer activity.
Following an approach comparable to that used for promoter usage, we mapped the CAGE tags to the FANTOM enhancer atlas elements 28 and quantified the expression of 3558 transcriptionally active enhancers. We performed cis-QTL mapping for enhancer activity using TADs as cis-windows, mapping 110 enhancer activity-QTLs (eaQTLs) associated with enhancer transcriptional activity at the significance threshold of 5% FDR (Supplementary Fig. 7a). eaQTLs are enriched in the proximity of associated enhancers (Fig. 4a), within open chromatin regions (Supplementary Fig. 7b), and in loci carrying histone marks specific to enhancer elements and active transcription (Fig. 4b). In accordance with a previous report on enhancers and promoters 30 , higher H3K4me1 than H3K4me3 ChIP-seq signals are observed at eaQTL sites, while the opposite trend characterized puQTLs, despite a fraction of them overlapping enhancer elements (Supplementary Fig. 7c).
We hypothesized that a combination of puQTL and eaQTL analyses may contribute to the identification of regulatory variants associated with gene expression whose effects are essentially exerted on enhancer regions. To address this question, we used the Enhancer-FANTOM Robust Promoter associations 28 to link 66 enhancers with 293 CAGE peaks in 322 unique eaQTL-enhancer-promoter triplets. Among these triplets, 39 include eaQTLs that are also mapped as puQTLs or are in LD with a puQTL (Spearman's ρ > 0.8) (Fig. 4c). An illustrative example is the ARL4C gene (ENSG00000188042.5) (Fig. 4d). The rs1464264 variant was mapped as a puQTL for the two ARL4C promoters (p value = 1.64 × 10 −8 for p1 and p value = 5.33 × 10 −10 for p2) (Fig. 4e), as an eaQTL for two enhancers (p value = 2.62 × 10 −7 for e1 and p value = 4.26 × 10 −5 for e2) paired with ARL4C (Fig. 4f), and as an eQTL (p value = 7.74 × 10 −7 ) (Fig. 4g). These observations allow us to build a hypothetical mechanism for the ARL4C-associated eQTL, which includes increased activity of the enhancer located ~200 kb downstream of the ARL4C promoter. Under this hypothetical model, the higher level of ARL4C mRNA observed in the presence of the alternative allele is driven by the increased activity of the distant enhancer region resulting from the genetic variation. A similar scenario was observed for the SWAP70 gene (Switching B-cell complex subunit 70, ENSG00000133789.10) and an enhancer region localized 50 kb upstream of it (Supplementary Fig. 7d-g).
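The LD criterion used above to link eaQTLs with puQTLs (Spearman's ρ > 0.8) can be sketched with simulated genotypes (the LD level is invented); tie-aware average ranks matter here because genotype dosages take only the values 0, 1, and 2.

```python
import numpy as np

def rankdata(x):
    # Average ranks with tie handling (important for 0/1/2 genotype dosages).
    x = np.asarray(x, dtype=float)
    order = np.argsort(x, kind="stable")
    ranks = np.empty(len(x))
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2.0 + 1.0   # shared average rank
        i = j + 1
    return ranks

def spearman(a, b):
    # Spearman's rho = Pearson correlation of the tie-aware ranks.
    return float(np.corrcoef(rankdata(a), rankdata(b))[0, 1])

rng = np.random.default_rng(3)
g1 = rng.integers(0, 3, 154).astype(float)                      # eaQTL variant
g2 = np.where(rng.random(154) < 0.95, g1, rng.integers(0, 3, 154))  # high-LD puQTL
print(spearman(g1, g2) > 0.8)  # passes the rho > 0.8 linking threshold
</n```

In practice a library routine such as `scipy.stats.spearmanr` would be used; the hand-rolled ranks above just make the tie handling explicit.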
Finally, for the set of 39 regulatory variants identified as both eaQTLs and puQTLs, we assessed causal molecular relationships for model networks including (1) the eaQTL variants, (2) the enhancer transcriptional activity, and (3) the paired promoter expression values. Using causal inference testing (cit R package) 31 for 92 triplets (39 eaQTL-associated enhancers paired with 78 promoters), we independently tested two models that consider enhancer transcriptional activity as the molecular mediator of gene expression, and vice versa (Supplementary Fig. 8a). The effect of regulatory variants mapped as puQTLs and eaQTLs on enhancer activity was found causal for the association observed with the target gene expression level for 17 triplets at the significance threshold of 5% FDR, involving 12 eaQTLs (Supplementary Fig. 8b).
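The mediation logic that cit formalizes can be illustrated with a toy simulation (the real test evaluates both causal directions and computes formal p values; every effect size here is invented): if the variant's effect on expression flows through enhancer activity, conditioning expression on the enhancer should abolish the variant-expression association.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 154
g = rng.integers(0, 3, n).astype(float)          # eaQTL genotype dosages
enhancer = 0.9 * g + rng.normal(0.0, 0.5, n)     # mediator: enhancer activity (eRNA)
expr = 1.2 * enhancer + rng.normal(0.0, 0.5, n)  # paired promoter expression

def residualize(y, x):
    # Remove the linear effect of x from y.
    x_c = x - x.mean()
    return y - ((x_c @ (y - y.mean())) / (x_c @ x_c)) * x_c

r_total = np.corrcoef(g, expr)[0, 1]
r_conditioned = np.corrcoef(g, residualize(expr, enhancer))[0, 1]
print(round(r_total, 2), round(abs(r_conditioned), 2))  # strong total, ~0 conditioned
```

Under the reverse model (expression as the mediator of enhancer activity) the conditioned association would not vanish, which is how the two competing models can be distinguished.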
Taken together, we provide here the first set of human regulatory variants associated with enhancer activity based on eRNA quantification and illustrate the potential of using complementary molecular phenotypes to dissect the mechanism(s) underlying enhancer-related eQTLs.
Discussion
We described here a collection of 5376 puQTLs and 110 eaQTLs: regulatory variants associated with promoter usage and enhancer activity, respectively. By leveraging CAGE technology to quantify these molecular phenotypes, this study highlights how CAGE transcriptome profiling coupled with QTL mapping can help dissect the genetic mechanisms underlying eQTLs and potentially disease-associated variants.
We find extensive overlap between puQTLs mapped from CAGE data and eQTLs mapped from RNA-seq, as a result of the expected high correlation between the mRNA quantification provided by both technologies 32 . While the analysis of exon usage with RNA-seq requires increased sequencing depth 33 , it enables an exon-wise quantification that complements the specific TSS information provided by CAGE. The combination of the two approaches (i.e., RNA-seq and CAGE cis-QTL mapping) therefore constitutes an effective strategy to give a broader view of the molecular mechanisms underlying regulatory variants. We leveraged these advantages to discover puQTLs involved in differential promoter usage and, by extension, in differential transcript isoform production. Our approach identified, among genes of groups-2, -3, and -5 (Fig. 2), regulatory processes for 5′-end transcript variations, adding to the current knowledge of alternative transcript-splicing QTLs 5,7,8 . While we opted to analyze the effects of genetic variation on differential promoter usage, a recent study has mapped regulatory variants associated with TSS usage (tssQTLs) by performing single nucleotide resolution TSS phenotyping in fruit flies using CAGE 34 . They describe tssQTLs not affecting transcript levels, in line with our observations. The low expression level and poor annotation of lncRNAs 35 limit the power to identify lncRNA-eQTLs 36 . Nevertheless, eQTL studies on lncRNAs, even restricted to a few hundred non-coding genes, established a substantial contribution of lncRNA-associated regulatory variants to human phenotypes 5,36-39 . As anticipated by a recent report 40 , and revealed in our study with the detection of puQTLs associated with 489 lncRNAs and 271 antisense transcripts, the precise genome-wide TSS mapping and the accurate quantification of low-expressed non-coding transcripts are complementary features that CAGE can provide for conducting QTL mapping, compared to RNA-seq.
Moreover, our strategy could further contribute to the characterization of potential roles for lncRNAs in human traits by combining it with the recently produced atlas of putative functional human lncRNAs 41 . This atlas contains over 9000 lncRNAs, including about 3000 enhancer-associated lncRNAs.
Evidence supporting the so-called "multiple enhancer variant" hypothesis for GWAS traits has been reported for loci carrying multiple regulatory variants within enhancers that cooperatively alter the expression of target genes 42 . Although high-throughput reporter assays have been used to test regulatory consequences of non-coding variants on reporter genes 11,43,44 , the lack of native chromatin context represents the main limitation of these methods, which do not investigate epistatic interactions between multiple variants. The approach developed in our study to map eaQTLs constitutes a potential strategy to unravel regulatory mechanisms involving multiple variants within enhancer elements.
Overall, this study reveals that differential promoter usage is an important consequence of functional variation in the human genome. Our eaQTL mapping analysis provides the opportunity to dissect mechanisms underlying regulatory variants located within enhancer elements. Finally, our study highlights how CAGE transcriptome profiling coupled with QTL mapping furthers our understanding of eQTL effects and contributes to the effective interpretation of disease-associated variants.
Methods
Cell culture. EBV-transformed LCLs (Supplementary Table 1) purchased from the Coriell Cell Repository (CEU, n = 86, with the authorization of the ethical committee of the University of Geneva Medical School) or from the GenCord collection (n = 68, informed consent was obtained from all human subjects and the project approved by the local ethics committee at the University Hospital of Geneva CER 10-046) 45 were cultured in conventional medium for LCLs (RPMI 1640, GlutaMAX; Gibco) with 15% fetal bovine serum (Gibco), 50 U ml⁻¹ penicillin and 50 µg ml⁻¹ streptomycin (Gibco). Cells were harvested in growing phase (<10⁶ cells ml⁻¹), and culture media were systematically tested for mycoplasma infection (Venor GeM Mycoplasma detection kit, Sigma-Aldrich) prior to proceeding with RNA extraction.
RNA preparation. For each cell line, a nucleus-enriched RNA fraction was isolated from 20 million cells, as detailed in Fort and colleagues 46 . Briefly, cells were first lysed in chilled lysis buffer (0.8 M sucrose, 150 mM KCl, 5 mM MgCl₂, 6 mM 2-mercaptoethanol, and 0.5% NP-40) and centrifuged for 5 min at 10,000×g (4 °C). Nuclei pellets were washed twice with lysis buffer before resuspension in TRIzol Reagent (Life Technologies). The RNeasy kit (Qiagen) was used according to the manufacturer's protocol to extract nucleus-enriched RNA fractions. During the RNA purification process, samples were treated with DNase I (TURBO DNA-free kit, Ambion) following the manufacturer's recommendations.
CAGE library preparation and data processing. CAGE libraries were prepared from 3 µg of RNA, using the reagents and following the protocols published by Takahashi and colleagues 47 . Briefly, the initial reverse transcription was performed using random primers and in the presence of sorbitol and trehalose. Then, the enrichment of capped RNAs was obtained with initial oxidation of the 5′-cap RNA diol group, resulting in a dialdehyde that was then coupled with long-arm biotin hydrazide before capture of biotinylated RNA/cDNA hybrids on streptavidin-coated magnetic beads. Samples were treated with RNase-I, cleaving single-stranded RNA and discarding cDNA that did not reach the 5′-cap. Finally, RNA/cDNA hybrids were denatured with alkali to recover cap-selected single-stranded cDNAs. Sample multiplexing was achieved by introducing barcode sequences within 5′-linkers that were ligated to the 3′ extremity of first-strand cDNA. The 5′-linkers were also used for priming the second-strand cDNA synthesis and for CAGE tag generation using the EcoP15I restriction enzyme. Following 3′-linker ligation and prior to loading on the sequencing platform, a final CAGE library amplification using nine PCR cycles was performed.
CAGE libraries were sequenced on the Illumina HiSeq 2500 platform with a read length of 50 bases. Sequences with ambiguous base calling were discarded, sample reads were split by barcodes, and artefactual linker/adapter sequences were removed using TagDust (v2.2) 48 . Resulting reads were 26-42 bases in length. CAGE tags were mapped to the reference genome hg19/GRCh37 using Delve (V1.0) 49 and Burrows-Wheeler Aligner (BWA V0.5.6) 50 . Two mismatches were allowed for the mapping procedure and only reads with MapQ values over 20, and therefore mapping to single loci of the reference genome, were used in our analyses. Finally, reads mapping to ribosomal RNA were eliminated.
Of the 164 samples originally sequenced, 8 with less than 5 × 10⁶ mapped CAGE tags were discarded, in line with sequencing depth recommendations for the CAGE technology 47 .
Promoter expression. Genomic coordinates of 195,296 robust autosomal human FANTOM CAGE peaks and their gene assignment annotations were retrieved in May 2016 from http://fantom.gsc.riken.jp/5/data/. CAGE tag counts per CAGE peak were normalized for sequencing depth, converting tag counts to tags per million mapped reads (TPM) and, similarly to the FANTOM promoter atlas 9 , TPM values were further normalized between samples using the relative logarithmic expression (RLE) normalization procedure from edgeR 51 . We applied a minimum expression threshold of 0.5 RLE-TPM on the mean expression over all individuals included in the study (Supplementary Fig. 1b), and constructed an expression matrix including 38,759 autosomal CAGE peaks for the 154 individuals.
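The depth normalization and expression filtering described above can be sketched in a few lines. This is our own minimal illustration in Python: the between-sample RLE normalization from edgeR is omitted, and all function names are ours.

```python
import numpy as np

def tags_per_million(counts):
    """Normalize a (peaks x samples) raw CAGE tag count matrix for
    sequencing depth: each sample (column) is scaled to tags per
    million mapped reads (TPM)."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=0) * 1e6

def filter_low_expression(tpm, min_mean=0.5):
    """Keep CAGE peaks whose mean normalized expression across all
    individuals reaches the threshold (0.5 RLE-TPM in the study)."""
    keep = tpm.mean(axis=1) >= min_mean
    return tpm[keep], keep

# Toy example: 3 CAGE peaks x 3 individuals
counts = np.array([[100, 200, 50],
                   [  0,   1,  0],
                   [ 30,  40, 60]])
tpm = tags_per_million(counts)
filtered, kept = filter_low_expression(tpm)
```

In the study itself, TPM values are additionally RLE-normalized between samples with edgeR before the 0.5 threshold is applied.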
Genotype data. GenCord individuals were genotyped with the Illumina 2.5M Omni chip in a previous study 15 . Variants were imputed into 1000 Genomes phase-3 14 using SHAPEIT2 (V2.20) 52 and IMPUTE2 (V2.3.2) 53 , yielding 9.1 × 10⁶ SNVs. The genotyping data for the CEU individuals included in our study were retrieved from the whole-genome sequencing analyses performed by the 1000 Genomes Project Consortium 14 . We combined these two data sets and filtered for an information score above 0.5 and a minimum alternative allele count of 10. We were left with genotype data at 7,508,202 autosomal variants for the 154 individuals.
Genotype sequencing data consistency control. Allelic consistency between genotype and CAGE tag sequences was assessed with the match BAM to VCF (MBV) method (QTLtools package V1.1) 16 . Of the 156 samples passing the sequencing depth threshold, none showed amplification bias; samples from two individuals with suspected cross-sample contamination were removed from the study (Supplementary Fig. 1c).
Mapping QTLs. We mapped puQTLs and eaQTLs using QTLtools (V1.1) 17 , with the following sets of covariates: for the puQTL mapping, the first 3 PCs derived from genotype data and the first 20 PCs derived from promoter-normalized expression values. We controlled for stratification due to sample collections (Supplementary Fig. 2a) and library preparation batches (Supplementary Fig. 2b) in the normalized promoter usage data. For the eaQTL mapping, we used the first 3 PCs derived from genotype data and the first 12 PCs derived from enhancer-normalized activity values.
We delimited the set of variants to be tested per molecular phenotype by using TADs, as defined by Hi-C data on LCLs 19 . To determine whole-genome significance, 1000 permutations were first performed to adjust nominal p values for the number of independent tests performed for each promoter or enhancer per cis-window. Second, adjusted p values were corrected for the total number of promoters or enhancers tested genome-wide using the qvalue R package (V2.2.2) 54 . We finally extracted puQTLs or eaQTLs with q value < 0.05, which corresponds to a 5% FDR.
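The per-window permutation step can be illustrated with a small sketch. This is our own simplification in Python: the best association per cis-window is scored by the maximum squared Pearson correlation rather than the linear-model p value, and QTLtools' beta-distribution approximation of the permutation null, as well as the genome-wide qvalue correction, are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def best_assoc(phenotype, genotypes):
    """Strongest association in the cis window, measured here as the
    maximum squared Pearson correlation over all tested variants
    (a stand-in for the smallest nominal p value)."""
    y = phenotype - phenotype.mean()
    best = 0.0
    for g in genotypes:
        x = g - g.mean()
        r = (x @ y) / np.sqrt((x @ x) * (y @ y))
        best = max(best, r * r)
    return best

def permutation_adjusted_p(phenotype, genotypes, n_perm=1000):
    """Adjust the best per-window statistic for the number of
    correlated tests by permuting sample labels and recording the
    best statistic obtained in each permutation."""
    observed = best_assoc(phenotype, genotypes)
    null = [best_assoc(rng.permutation(phenotype), genotypes)
            for _ in range(n_perm)]
    # +1 smoothing keeps the empirical p value away from exactly zero
    return (1 + sum(s >= observed for s in null)) / (n_perm + 1)
```

The adjusted p values from all promoters or enhancers would then be corrected genome-wide (with qvalue in the paper), retaining phenotypes at q value < 0.05.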
Enrichment analysis. The QTLtools fenrich module (V1.1) 17 tests whether a set of QTLs falls within functional annotations more often than expected by chance. We used this module with annotations retrieved from the UCSC Table Browser.

GWAS hits enrichment. To assess how many puQTLs overlap GWAS variants, we selected puQTLs either matching the variants reported in the GWAS catalog 24 or near a GWAS variant (±500 kb) with an R-squared greater than 0.5. To estimate the expected overlap by chance of puQTLs and GWAS hits, we performed 1000 permutations using random variants with allelic frequencies and distances to TSSs similar to those of puQTLs.
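A bare-bones version of this overlap-and-permute scheme might look like the following. This is our own illustration: the r² > 0.5 LD condition and the matching of random variants on allele frequency and TSS distance are omitted for brevity, and positions are assumed to lie on a single chromosome.

```python
import numpy as np

rng = np.random.default_rng(1)

def n_overlapping(qtl_pos, gwas_pos, window=500_000):
    """Count QTLs lying within +/- window of the nearest GWAS
    variant (the paper's additional LD-based r^2 condition is not
    modeled here)."""
    gwas = np.sort(np.asarray(gwas_pos))
    qtl = np.asarray(qtl_pos)
    idx = np.searchsorted(gwas, qtl)
    lo = np.clip(idx - 1, 0, len(gwas) - 1)
    hi = np.clip(idx, 0, len(gwas) - 1)
    nearest = np.minimum(np.abs(qtl - gwas[lo]), np.abs(gwas[hi] - qtl))
    return int(np.sum(nearest <= window))

def null_overlap_counts(candidate_pos, gwas_pos, n_qtl, n_perm=1000):
    """Expected overlap by chance: repeatedly draw n_qtl random
    variants from a (matched) candidate pool and count overlaps."""
    pool = np.asarray(candidate_pos)
    return [n_overlapping(rng.choice(pool, n_qtl, replace=False), gwas_pos)
            for _ in range(n_perm)]
```

Comparing the observed overlap count against this null distribution gives the empirical enrichment.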
The RTC analysis was performed using the QTLtools rtc module (V1.1) 17 using the GWAS catalog 24 (accessed in May 2017).
Classification of puQTL-associated genes. We initially grouped puQTL-associated genes with either a single CAGE peak or with several CAGE peaks located less than 200 nt apart. The other multi-promoter genes were categorized based on the effect of puQTLs on their different CAGE peaks. To this end, the effect size and associated p value for each CAGE peak of corresponding puQTL-associated genes were calculated, and puQTL-promoter associations with a p value <0.05 were considered. We grouped genes associated with puQTLs having significant concordant regulatory effects (β, regression slope) into group-4 and group-5 based on the effect size ratio (ER = |max(β)|/|min(β)|), grouping genes with different effects on CAGE peaks when ER > 2.
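The grouping logic described above can be paraphrased in a few lines. This is a sketch with our own labels; the paper's exact group-2 to group-5 assignment involves additional criteria not reproduced here.

```python
def classify_peak_effects(betas, pvals, alpha=0.05, er_cutoff=2.0):
    """Classify a multi-promoter gene by the puQTL effects on its
    CAGE peaks. betas/pvals are per-peak regression slopes and
    p values for the lead variant; only significant peaks count."""
    sig = [b for b, p in zip(betas, pvals) if p < alpha]
    if not sig:
        return "no significant peak effect"
    # Opposite signs: the variant switches promoter usage direction
    if any(b > 0 for b in sig) and any(b < 0 for b in sig):
        return "opposite effects"
    # Concordant signs: split by the effect size ratio ER
    er = max(abs(b) for b in sig) / min(abs(b) for b in sig)
    if er > er_cutoff:
        return "concordant, different magnitudes"
    return "concordant, similar magnitudes"
```

For example, slopes of 1.0 and 0.4 on two peaks (both significant) give ER = 2.5 and fall into the "different magnitudes" class.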
RNA-seq mRNA quantifications and eQTL mapping. We retrieved mapped RNA-seq data (BAM files) for 154 Central European individuals 4,5 (Supplementary Table 2). We performed gene and exon quantification using QTLtools quant 17 and GENCODE-V19 as the reference for transcripts. Genotype data for these samples were retrieved from whole-genome sequencing analyses performed by the 1000 Genomes Project Consortium 14 . We used the QTLtools cis module (V1.1) 17 with a nominal pass and gene expression matrices; as covariates, we used the first 3 PCs derived from genotype data and the first 20 PCs derived from gene-normalized expression values.
TSS annotation. The CAGE peaks retrieved from the FANTOM promoter atlas were annotated using the GENCODE-V19 20 transcripts reference set and the ChromHMM 21 segmentation based on ENCODE ChIP-seq histone marks from the LCL GM12878. Our hierarchical annotation procedure has four steps. First, FANTOM CAGE peaks within 500 nt upstream of annotated TSSs or residing within a 5′-UTR first exon or a 5′-UTR first intron were annotated as "Annotated genes". The other CAGE peaks were annotated as "Not annotated transcript" and further categorized in "enhancer," "promoter," or "other" based on epigenetic features. Finally, the "promoter" was subdivided into "Anti-sense promoter" and "Putative promoter" based on genomic localization.
Enhancer activity quantification. Enhancer regions transcriptionally active in our cohort of LCLs were selected with a procedure similar to the approach detailed in the FANTOM enhancer atlas 28, where they detected enhancers based on balanced bidirectional transcriptional hallmarks 22 . First, 63,991 autosomal enhancer regions were retrieved in May 2016 from the FANTOM atlas (http://fantom.gsc.riken.jp/5/data/). Enhancer elements characterized with bidirectional transcription patterns in our samples were selected. To this end, we first produced CAGE tag clusters using the Paraclu method (V9) 55 , with CAGE tag 5′ genomic coordinates as input and (i) a minimum of five tags per cluster, (ii) a (maximum density)/(baseline density) ≥ 2 and (iii) a maximal cluster length of 200 nt. To select enhancer regions with a bidirectional transcriptional pattern, we required the overlap of two CAGE tag clusters on opposite DNA strands within a 400 nt window from the enhancer region center.
We then calculated normalized expression for both flanking 200 nt windows (F and R) to determine, for each enhancer region, a directionality score, D = (F−R)/ (F + R). Counts of CAGE tags per F and R windows were normalized for sequencing depth, converting tag counts to tags per million mapped reads (TPM) and, similarly to FANTOM 9 , TPM values were further normalized between samples using the RLE normalization procedure from edgeR 51 . We then filtered enhancer regions to have non-promoter-like directionality pattern, requiring |D| < 0.8. Finally, we summed the twice normalized expression of the 200 nt flanking windows to assign a single expression value to each enhancer, and discarded enhancers with null expression in more than 50 individuals. We built an expression matrix for 3558 enhancers for 154 individuals.
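The directionality filter reduces to a few arithmetic steps; here is a minimal sketch (function names are ours).

```python
def directionality(f_tpm, r_tpm):
    """Directionality score D = (F - R) / (F + R), computed from the
    normalized expression of the forward (F) and reverse (R) 200-nt
    windows flanking the enhancer region center."""
    return (f_tpm - r_tpm) / (f_tpm + r_tpm)

def is_balanced(f_tpm, r_tpm, cutoff=0.8):
    """Enhancer-like, balanced bidirectional transcription requires
    |D| < 0.8; strongly one-sided (promoter-like) regions fail."""
    return abs(directionality(f_tpm, r_tpm)) < cutoff

def enhancer_activity(f_tpm, r_tpm):
    """Single activity value per enhancer: the sum of the two
    normalized flanking windows."""
    return f_tpm + r_tpm
```

A region transcribed only in one direction (e.g., F = 9, R = 1 gives D = 0.8) is discarded, while a balanced one (F = 6, R = 4 gives D = 0.2) is kept and assigned activity F + R.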
Data availability. The sequencing CAGE data generated in this study are available in the ArrayExpress database at EMBL-EBI (www.ebi.ac.uk/arrayexpress) under accession number E-MTAB-5835. Derived data supporting the findings of this study are available from the corresponding author on request.
Cardiac Biomarkers in hypertensive disorders of pregnancy
In recent years, biomarkers have taken a central place in the assessment of cardiovascular diseases – from prediction to management and prognosis. On the other hand, enough evidence exists to assume that hypertensive disorders of pregnancy share a certain connection with cardiovascular diseases – from common risk factors and underlying mechanisms to the presence of a higher risk for women for the development of a great number of cardiovascular diseases, such as arterial hypertension, coronary atherosclerosis, stroke, peripheral artery disease, venous thromboembolism, and even a higher cardiovascular mortality. The key to a better understanding of the unfavorable cardiovascular profile of women with a hypertensive disorder of pregnancy may lie in their assessment with biomarkers, typically used in the field of cardiology. In this review, we have included studies investigating the use of cardiovascular biomarkers during or after a hypertensive pregnancy, namely, natriuretic peptides, high-sensitivity cardiac troponins, growth/differentiation factor 15 (GDF15), soluble suppression of tumorigenicity-2 (sST2), and galectin-3.

Edited by: Igor Spiroski
Citation: Gencheva D, Nikolov F, Uchikova E, Hristova K, Mihaylov R, Pencheva B. Cardiac Biomarkers in Pregnancies, Complicated by Hypertensive Disorders. Open Access Maced J Med Sci. 2021 Apr 16; 9F:137-144. https://doi.org/10.3889/oamjms.2021.5913
Introduction
Both cardiovascular diseases [1] and hypertensive disorders of pregnancy [2] are socially important entities leading to considerable disability and mortality in people of active ages. The mechanisms for the development of hypertensive disorders of pregnancy are not completely understood, despite the considerable progress in the field. Poor placentation [3] is now believed to be the triggering event that results in a multisystemic response from the mother that can lead to devastating consequences, such as HELLP syndrome (hemolysis, elevated liver enzymes, and low platelets syndrome), pulmonary edema, renal failure, encephalopathy, eclampsia, disseminated intravascular coagulation, and others [4]. The pregnancy itself is at risk, as fetal growth may be impaired, and premature birth, placental abruption, and fetal death are more likely [5]. Although hypertensive disorders induced by pregnancy resolve with the delivery of the placenta or soon after, it is now known that the risk of numerous cardiovascular diseases -arterial hypertension, coronary atherosclerosis, stroke, heart failure, peripheral artery disease, and venous thromboembolism [6], [7] -remains higher in these women for years after the completion of the pregnancy and even translates into a higher cardiovascular mortality [8]. A very plausible theory unifies these seemingly different pathologies: hypertensive disorders of pregnancy act as a "stress test" [9], unmasking latent endothelial dysfunction and a higher risk of future cardiovascular disease in women. Reports of more pronounced changes in the structure and function of the heart during such pregnancies are not uncommon, such as an increase in left ventricular mass, dilation of chambers, features of diastolic dysfunction, and occasionally of systolic dysfunction [10].
Indeed, hypertensive disorders of pregnancy share some common pathophysiological mechanisms with cardiovascular diseases, as well as risk factors. Preeclampsia is characterized by vasoconstriction and an increase in afterload as a result of an imbalance between vasodilators and vasoconstrictors -there is an increased sensitivity to angiotensin II and less production of nitric oxide [11]. A similar imbalance has been implicated in the mediation of chronic heart failure -the end stage of all severe cardiovascular diseases -where it leads to progressive remodeling and cardiac dysfunction [12]. Oxidative stress and endothelial dysfunction are also common denominators between those groups of diseases [13]. In hypertensive disorders of pregnancy, there is an exaggerated activation of inflammation, as evidenced by the presence of elevated levels of proinflammatory cytokines [14]; low-grade inflammation is also known to accelerate the progression of atherosclerosis and is connected to poor cardiovascular outcomes [15]. Hypertensive disorders of pregnancy are also associated with an unfavorable lipid profile [16] and with insulin resistance [17] -both of which are risk factors for the development of cardiovascular diseases. Besides the obstetric risk factors, hypertensive disorders of pregnancy are more common in women presenting with typical cardiovascular risk factors, such as obesity, diabetes, pre-existing arterial hypertension, and thrombophilias [18].
Modern-day cardiology makes use of biomarkers, both in acute and chronic settings, as they provide additional information on mechanisms of disease development, facilitate treatment, and define prognosis. This poses the question of whether cardiac biomarkers could also be applied successfully in the assessment of women during hypertensive pregnancies. Proper characterization of women during this natural "stress test" could allow for a better understanding of the function of the heart and the vascular system, and therefore provide a more precise risk stratification.
In this review, we aim to summarize the existing evidence for an adverse cardiac biomarker profile in women during or after a hypertensive pregnancy when compared to normotensive pregnancies and, where available, to provide information on correlations between the levels of those biomarkers and the clinicopathological characteristics of the women. We have included the following biomarkers -natriuretic peptides, high-sensitivity cardiac troponins, growth/differentiation factor 15 (GDF15), the suppression of tumorigenicity-2 (ST2), and galectin-3. Biomarkers with a primary role in the development of preeclampsia, such as placental growth factor (PlGF), soluble FMS-like tyrosine kinase 1 (sFlt-1), and vascular endothelial growth factor (VEGF), are not an object of this review, although they are known to have certain uses in the field of cardiology.
Natriuretic peptides (BNP and NT-proBNP)
The family of natriuretic peptides consists of A-, B-, and C-type natriuretic peptides. The prohormone proBNP is secreted mostly by the ventricular cardiomyocytes as a response to the stretching of the myocardial wall due to elevated pressures, and its molecule is then cleaved into the biologically active BNP and the inactive NT-proBNP. BNP stimulates diuresis, natriuresis, and vasodilation and antagonizes the two systems mediating heart failure -the renin-angiotensin-aldosterone system and the sympathetic nervous system [19]. The use of either BNP or NT-proBNP is recommended by the current heart failure documents of the American College of Cardiology Foundation (ACCF) and the American Heart Association (AHA) for establishing the diagnosis of heart failure in patients with dyspnea, as well as for prognostic purposes in chronic and acutely decompensated cases, with the highest class of recommendation (I) and level of evidence (A). With a lower strength of recommendation and evidence, they can be used for post-discharge prognosis in chronic heart failure and in patients at risk of developing heart failure (IIA, B-N and IIA, B-NR) [20]. It is also worth noting that BNP/NT-proBNP are so far the only biomarkers recommended by the European Society of Cardiology in the setting of heart failure [21].
Higher levels of NT-proBNP and BNP also correlate with a higher all-cause mortality in heart failure patients with preserved and reduced ejection fraction in the 6 months after hospital discharge [22]. BNP is also known to correlate with the degree of left ventricular diastolic dysfunction and right ventricular systolic dysfunction [23], as well as with a larger left ventricular end-systolic and end-diastolic diameter, left atrial diameter, the degree of mitral insufficiency, and a lower left ventricular ejection fraction in the setting of acute heart failure, as measured echocardiographically [24]. In a study by Tschöpe et al., the levels of NT-proBNP correlated with invasively measured parameters of diastolic dysfunction -left ventricular end-diastolic pressure, dP/dt, Tau, and pulmonary capillary wedge pressure at rest and during exercise -as well as with the NYHA functional class of heart failure [25].
In hypertensive disorders of pregnancy
Elevated levels of natriuretic peptides in hypertensive diseases of pregnancy could be a defense mechanism directed against vasoconstriction, known to happen in those conditions. In a systematic review, Afshani et al. [26] selected 12 studies that examined the relationship between BNP levels and preeclampsia, eclampsia, and preterm delivery. The data suggested that levels of BNP remained unaltered in normal pregnancy, but were elevated in the third trimester of preeclamptic women and remained so 3-6 months after the end of the pregnancy. The authors stated that high levels of BNP could be an indicator of cardiovascular complications and preterm delivery, but more investigation on the topic is needed. There was no association between natriuretic peptides and HELLP syndrome and no association with the progression to eclampsia.
Another meta-analysis [27], incorporating data from three studies about BNP, concluded that BNP levels were also elevated when comparing severe to mild forms of preeclampsia. A different study discovered higher levels of BNP in early-onset preeclampsia compared to late-onset preeclampsia [28]. Few studies exist that directly examine the association between the levels of natriuretic peptides and echocardiographic findings in hypertensive pregnancies. A study by Naidoo et al. additionally to the increased pre-delivery value of NT-proBNP in preeclamptic women discovered weak positive correlations between NT-proBNP and the tissue Doppler Ea of the mother and the resistance index of the umbilical artery [29]. The serum BNP correlated negatively with ejection fraction and TAPSE and positively with E/Em ratio in the severe preeclampsia group of a 2019 study, but the levels themselves were not increased after adjustment for maternal and gestational age when severe preeclampsia was compared to the controls [30]. Tihtonen et al. found a positive correlation between NT-proBNP and systemic vascular resistance index and an inverse one with the cardiac index in preeclamptic women as assessed by whole-body impedance cardiography [31].
As far as gestational hypertension is concerned, in 2016, Sadlecki et al. reported that serum NT-proBNP and BMI were independent predictors of the occurrence of gestational hypertension in a multivariate logistic regression analysis. They were also indicative of the presence of preeclampsia and correlated inversely with birth weight. The authors suggested the use of NT-proBNP for better identification and management of such women; however, one limitation of the study was the small sample size (14 women with PE and 26 with GH) [32].
Cardiac troponins
Troponin I and troponin T are muscle proteins that regulate muscle contraction and are highly specific and sensitive for the detection of myocardial injury. Elevated troponin I or T levels are a necessary criterion for the establishment of the diagnosis of myocardial infarction as per the fourth universal definition of myocardial infarction [33] and high-sensitivity assays are useful for the rule-out of myocardial injury in the acute clinical setting. Higher levels are also known to correlate to the size of the myocardial infarction and to predict worse outcomes in acute coronary syndromes [34].
High-sensitivity cardiac troponins are associated with the incidence of coronary heart disease, fatal coronary heart disease, total mortality, and heart failure [35]. The 2017 update of the ACC/AHA/HFSA heart failure guideline recommends the use of high-sensitivity cardiac troponin as a marker of cardiac fibrosis for the prediction of hospitalization and death in heart failure patients, although with a lower class of recommendation and level of evidence (IIB, B-NR) when compared to the natriuretic peptides [20].
In hypertensive disorders of pregnancy
There are conflicting reports on whether hypertensive pregnancies lead to higher levels of cardiac troponins or not. A systematic review of nine studies published up to 2015, involving 719 women, found that five of the studies indicated significantly higher levels of troponin I in preeclampsia, but the other four did not; additionally, the authors of the review criticized the lack of consecutive measurements [36]. Fleming et al. found higher levels of troponin I in the preeclampsia group compared to the gestational hypertension group in their study [37]. A 2018 study found elevated high-sensitivity troponin I in 25% of the women with preeclampsia and also a significant linear relationship between troponin and mean arterial pressure [38]. In a relatively large prospective study, high-sensitivity cardiac troponin I was an independent predictor of gestational hypertension and preeclampsia during pregnancy and after delivery -with an odds ratio of 9.3 in unadjusted and 11.5 in adjusted models per doubling of its concentration [39].
Conversely, the authors of a study that did not confirm elevated troponin I in preeclamptic pregnancies advised the exclusion of other reasons for myocardial injury in women with elevated cardiac troponins [40].
Umazume et al. made serial measurements of high-sensitivity troponin I accompanied by echocardiographic assessment of women with hypertensive disorders of pregnancy and found that the serum levels correlated negatively with the maternal e-wave in the third trimester and 1 month postpartum. The authors found an area under the curve of 0.82 and 0.81, respectively, for the prediction of reduced left ventricular relaxation in those periods [41]. Muijsers et al. measured high-sensitivity troponin I levels 9-10 years after an early-onset preeclamptic pregnancy; while there was no overall difference compared to women with a normotensive pregnancy in that time period, women with a history of early-onset preeclampsia who were currently hypertensive had higher levels than those who remained normotensive. Higher troponin levels were also associated with a higher blood pressure [42].
Soluble suppression of tumorigenicity-2 (sST2)
A relatively novel biomarker in the cardiovascular field, the suppression of tumorigenicity-2 (ST2) is a member of the interleukin-1 family, with a circulatory form -soluble ST2 (sST2) that binds to IL-33 and thus promotes inflammation, hypertrophy, fibrosis, and ventricular dysfunction [43]. The biomarker is secreted by the cardiac myocytes and fibroblasts and is increased under mechanical stress [44]. The biomarker has a promising role in the prognosis and management of heart failure [45], Eisenmenger's syndrome [46], and major adverse cardiac events and is also associated with the complexity of coronary atherosclerotic lesions [47].
The current ACC/AHA/HFSA update on the heart failure guideline also recommends its use in heart failure, as it could provide prognostic information and risk stratification additional to the use of natriuretic peptides, with a class of recommendation IIB, B-NR [20]. In a direct comparison study between sST2, galectin-3, and high-sensitivity troponin T in chronic heart failure patients with reduced ejection fraction, out of the three biomarkers, only the serial measurements of sST2 predicted reverse myocardial remodeling and independently added to the risk model for adverse cardiovascular events [48]. In patients with heart failure with preserved ejection fraction, sST2 levels were associated with the presence of diabetes mellitus, atrial fibrillation, systemic congestion, and kidney failure. They also correlated with worse exercise tolerance and higher levels of NT-proBNP, C-reactive protein, and high-sensitivity troponin [49].
In hypertensive disorders of pregnancy
Soluble ST2 has also been examined in the setting of preeclampsia, although large studies are not available. In a longitudinal study of 160 uncomplicated pregnancies and 40 preeclamptic ones, maternal plasma sST2 concentrations were elevated 6 weeks before the clinical presentation of preeclampsia, which the authors attributed to an exaggerated inflammatory response or an imbalance between humoral and cellular immunity [50]. Higher concentrations of sST2 were found in women with preeclampsia compared to normotensive pregnancies, and the difference was more pronounced in early-onset and in severe preeclampsia. There were no correlations with the uterine or umbilical Doppler findings. In addition, there was a negative correlation with placental growth factor -a pro-angiogenic factor that is necessary for proper placentation and is pathologically reduced in preeclampsia [51]. In a recent, relatively small study, sST2 levels were elevated in 24 women with severe preeclampsia 24 h before delivery, but not 1 year afterward, compared to healthy pregnant controls; additionally, in the preeclampsia group, there was an inverse correlation with echocardiographic markers of left ventricular diastolic function, but not with systolic ones [52].
Galectin-3
Galectin-3 is a protein with an established role in inflammation, immunity, and oncogenesis. It is secreted by the activated macrophages and its main function is associated with the activation of the fibroblasts that form collagen and fibrotic tissue [53]. Experimental animal studies prove its role in cardiac remodeling as a result of pressure overload [54]. In human studies, its upregulation has been established in patients with the left ventricular hypertrophy [53] and heart failure [55]. Its levels correlated with the number of hospitalizations for heart failure [56] and were also elevated in pulmonary hypertension, where they correlated with the prognosis, regardless of etiology [57]. In a 2016 study, there was a significant correlation between galectin-3 levels and the thickness of the ventricular septum, the left ventricular posterior wall, and the left ventricular mass in arterial hypertension. In the same study, its levels were elevated even in newly diagnosed hypertensive patients [58]. In a non-pregnant population, a negative correlation was discovered between galectin-3 levels and some parameters of the right ventricular function -TAPSE and the tricuspid S-wave, in patients with reduced left ventricular function, but the levels were not associated with the left-sided parameters themselves [59]. The recommendation of its use for additional risk stratification by the ACC/AHA/HFSA in chronic heart failure is IIB, B-NR, similarly to that of high-sensitivity troponin and sST2 [20].
In hypertensive disorders of pregnancy
Galectin-3 is not much studied in pregnancy-induced hypertension or its complications. However, we managed to identify a few relevant studies. In a 2019 study by Taha et al. [60], galectin-3 levels were significantly higher in preeclamptic women and additionally indicated a worse lipid profile -they showed a positive correlation with total, VLDL, and LDL cholesterol and triglycerides, and a negative one with HDL cholesterol. A positive correlation was also present with the maternal and gestational age of the women. Jeschke et al. [61] found an upregulation of both galectin-1 and galectin-3 in the extravillous trophoblast of eight preeclamptic placentas and five placentas of women with HELLP syndrome when compared to placentas of healthy women. In another study, there was also a higher galectin-3 expression in the umbilical cord of small-for-gestational-age infants when compared to appropriate-for-gestational-age ones [62].
Growth/differentiation factor 15 (GDF15)
GDF-15 is a protein from the transforming growth factor-β superfamily that is normally secreted in small quantities by many organs, but in much higher concentrations by the placenta in pregnancy. It is involved in apoptosis, inflammation, oncogenesis, and metabolism, but its role in pregnancy is not entirely clear [63]. It is now known to participate in cardiac ischemia [64] and in the formation of the atherosclerotic plaque [65]. High concentrations are present in heart failure and in different forms of coronary artery disease and are related to the progression of the disease, ventricular remodeling, plaque burden, and the severity of ischemia [66]. A large study proved that it can be used in the prediction of all-cause, cardiovascular, and noncardiovascular mortality, with better predictive abilities for all-cause mortality than the popular predictors NT-proBNP and C-reactive protein [67].
In hypertensive disorders of pregnancy
In normal pregnancy, serum levels of GDF-15 rose with the progression of the pregnancy, but were reduced in the third trimester in 34 women with preeclampsia, especially if the preeclampsia was late onset. It could not, however, be used as an early predictor (11-13 gestational weeks), as the levels did not differ in women who later on developed preeclampsia [68]. Results from other studies, however, showed the opposite: higher concentrations in hypertensive pregnancies. Sugulle et al. [69] examined GDF-15 concentrations in preeclampsia to test the hypothesis that placental oxidative stress is causing elevation of its levels. According to their results, maternal serum GDF-15 concentrations were higher in preeclamptic pregnancies at term compared to controls, and additionally, levels were elevated in the fetal circulation and the amniotic fluid in cases of preeclampsia and superimposed preeclampsia. The placental GDF-15 mRNA was also elevated in preeclampsia. The authors viewed their finding as a confirmation of the presence of oxidative stress and ischemia in preeclamptic pregnancies. In the same study, the levels were also higher in women with diabetes mellitus. Another group of authors [70] also found significantly elevated concentrations in preeclampsia, and the highest levels of GDF-15 were in its early-onset forms. There was a positive correlation with the systolic and diastolic blood pressure and a negative one with the gestational age at delivery and the birth weight. Those results were explained by the production of GDF-15 in the setting of cytokine-mediated endothelial injury.
Conclusion
The use of cardiac biomarkers in pregnancy is unfortunately not a widely researched area, but progress in it could lead to a better understanding of the mechanisms behind the development of hypertensive disorders of pregnancy. In addition, research could provide an explanation for the higher cardiovascular risk in affected women both during and years after a hypertensive pregnancy. The growing evidence of cardiac biomarkers being altered during hypertensive pregnancies necessitates a more thorough cardiologic assessment of affected women and could be promising for risk stratification for future cardiovascular events. More pronounced changes in the biomarker levels could indicate a worse cardiovascular profile. The identification of such women can facilitate health-care provision and prophylaxis, thereby improving the management of women's health issues. The information on the topic, however, remains somewhat scarce, especially when correlations of biomarkers with cardiac structural or functional changes are sought. Larger studies are needed, especially with more information about gestational hypertension, as this group of women tends to be underexamined when compared to preeclampsia, while the risk of further cardiovascular complications is far from negligible. | 2021-08-02T00:06:55.776Z | 2021-04-16T00:00:00.000 | {
"year": 2021,
"sha1": "50b51920f70cb4677be3ef3ed1732da89d36e4a7",
"oa_license": "CCBYNC",
"oa_url": "https://oamjms.eu/index.php/mjms/article/download/5913/5598",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "95e7536b2e0e034e45df205c2b3f38c9cc5abe57",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237326843 | pes2o/s2orc | v3-fos-license | Gender Perspective in Dual Diagnosis
Little data are available for women diagnosed with a dual diagnosis. However, dual diagnosis in women presents greater stigma, social penalties, and barriers to access to treatment than it does for men. Indeed, it increases the probability of suffering physical or sexual abuse, violent victimization, gender-based violence, unemployment, social exclusion, social-role problems, and physical and psychiatric comorbidities. Thus, a transversal sex and gender-based perspective is required to adequately study and treat dual diagnosis. For this, sex and gender factors should be included in every scientific analysis; professionals should review their own prejudices and stereotypes and train themselves specifically from a gender perspective; administrations should design and provide specific treatment resources for women; and we could all contribute to a structural social transformation that goes beyond gender mandates and norms and reduces the risk of abuse and violence inflicted on women.
Little data are available for women diagnosed with a dual diagnosis [1,2], to the extent that we do not even have clear data regarding the prevalence of dual diagnosis in the general female population. Available data indicate that dual diagnosis in women presents greater stigma [3,4], social penalties [3], and barriers to access to treatment [4,5] than it does for men, as gendered assumptions about appropriate behaviour for women (for instance acting as a wife and/or mother), societal disapproval of women's use of substances, and the risk of losing their relationships may prevent help-seeking [4]. Indeed, it increases the probability of suffering physical or sexual abuse [2,6,7], violent victimization [8], gender-based violence [9], unemployment [4,10], social exclusion [10], social-role problems such as fulfilling family and work obligations [4], and physical [2,4,11] and psychiatric comorbidities [4,12].
Basic, preclinical, and clinical research has shown the presence of biological differences between the sexes from the beginning of embryonic development and throughout the entire life cycle. This dimorphism affects health, protective or vulnerability factors, social and relational life, the search for treatment, and responses to therapeutic interventions. There is evidence for genetic differences in stress-related effects, known to often mediate or modulate sex differences in addiction-related behaviours [13]. Differences have been detected in brain areas involved in craving, addiction, and relapse: the cerebral cortex (females showing a larger extent of cortical neuropil and lower neuronal numbers), the medial amygdala (approximately 20% smaller in females), and the caudate putamen and hippocampus (larger in females than in males) [14].
Animal studies have revealed sex-dependent differences: females and males differ for motivation to obtain a specific drug, levels of drug intake, or the propensity to reinitiate drug-seeking behaviour following a period of abstinence [14]. The estrous cycle is key in differences in reward and craving for drug [14].
Adult women have more gray matter in the medial prefrontal cortex (important for regulating executive function), while males have more gray matter in the anterior cingulate cortex (involved in hedonic and impulsive activity), which could lead to sex differences in the cycle of substance use disorders, including maintenance and relapse [15]. Estradiol may exacerbate drug use by increasing reinforcing effects [16], and sex differences in stress circuitry may explain sex differences in risk for comorbid alcoholism and stress-related disorders [17]. For several substances, women take less time to progress to dependence than men, although it is not clear whether the menstrual cycle has a similar effect to the estrous cycle in increasing motivation to self-administer substances [14]. Regarding alcohol use disorder, sex differences have been found in tryptophan metabolism [18]. The density and regional distribution of µ-opioid receptors vary between the sexes and, in females, across the ovarian cycle, while women have a higher number of D2-like receptors in the frontal cortex than men [14]. In addition, sex differences in the effects of drugs of abuse might be due, at least in part, to differences in muscle mass and fat tissue distribution between women and men, as well as in gastric emptying time, which undergoes significant changes during the menstrual cycle [14].
However, studies on dual diagnosis continue to be carried out mainly in male patients [14] or focus only on differences between the sexes. Except for worthy exceptions e.g.: [19,20], we have little knowledge of the specific characteristics and needs of women, which in turn contributes to the androcentric design of interventions, resources, and treatment services. For this reason, women with dual diagnosis are reluctant to attend services at which they feel judged or, for example, feel they might risk losing their children [3,4].
Furthermore, female gender roles can act to precipitate dual diagnosis. Being a woman increases the probability of traumatic experiences such as abuse [7] and gender-based violence [21], which can hinder the development of adaptive coping strategies and can produce biological changes that themselves constitute vulnerability factors for substance use and mental illness [22]. In turn, substance use and mental illness are risk factors for further abuse and trauma [6,7], thus perpetuating the cycle of victimization. In parallel, domestic violence in childhood increases the risk of abuse in future relationships [9], maybe shaping girls into adopting behaviours typical of traditional female roles, which can foster emotional and economic dependence. This dependence also increases the risk of being initiated or induced into substance use by a partner [23], as women may abuse substances in an attempt to build or maintain relationships [4]. Moreover, a lack of adaptive coping skills (or the inability to put them into practice) is related to a reason for substance consumption in women: the desire to reduce emotional distress [20]. Therefore, women are more likely to self-medicate or use substances to deal with stress or pain [4]. Women are more likely to drink to regulate negative affect and stress reactivity, while men may be more likely to drink for positive reinforcement [15].
Thus, a transversal sex and gender-based perspective is required to adequately study and treat dual diagnosis. For this, sex and gender factors (e.g., caregiver role and unpaid work) should be included in every scientific analysis; professionals should review their own prejudices and stereotypes and train themselves specifically from a gender perspective; administrations should design and provide specific treatment resources for women; and we could all contribute to a structural social transformation that goes beyond gender mandates and norms and reduces the risk of abuse and violence inflicted on women.
National Institutes of Health required the inclusion of women in clinical research in 1993 and sex as a biological variable in basic and preclinical research in 2016 [15]. Research should distinguish biological sex from gender [24]; identify the precise role of hormones; and go deeper into the differential effects and health consequences that abused drugs may induce in women and men, the gender specific factors triggering drug use, the medical problem of drug use in pregnant women, and which environmental and sociocultural risk factors may contribute to drug abuse and relapse in women and men [14]. Studies should include careful task design, multiple levels of analysis and appropriate statistical modelling of sex considering a priori study hypotheses [25].
The diversity of user experiences (based on sex, gender, ethnicity, socio-economic level, ability, age, sexual orientation, etc.) should be recognized [4] and cared for. Women with dual diagnosis ask us to consider them, listen to their voices, and to create and adapt resources according to their specific needs and characteristics. Resources with conciliation, social integration, and autonomy promotion measures; and without language or sexist content are needed [3]. Treatment should broach gender stereotypes, acknowledge discrimination, and advocate for equality [4]. Comprehensive treatment devices for mental illness, addiction, and gender violence; with groups only for women [12]. Treatment related to family and trauma issues, with strategies specifically focused on reducing risk of abuse and coping with trauma [2]. Agencies should provide concrete assistance in prenatal health care, parenting education, and childcare; and residential programs should offer live-in options for children [4]. Programs incorporating childcare, parenting classes, job skills or employment enhancement, and specialized mental health treatment for trauma and comorbid mental illness [4]. They want services that meet their real needs, that stop blaming them, and that show them trust and the ability to listen, understand, and recognise their issues. Only in this way will women be able to sufficiently rebuild their identities and self-esteem and overcome the triple stigma of being female, mentally ill, and addicted. | 2021-08-28T06:17:17.314Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "eeb8ba27219b99e95e46b4bb09048c7ba9663092",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3425/11/8/1101/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "31dd489e3fd784abb1909976678267695d71453f",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250666962 | pes2o/s2orc | v3-fos-license | Upgrade of the Global Muon Trigger for the CMS experiment
To continue triggering with the current performance in the Large Hadron Collider's Run 2, the Global Muon Trigger (GMT) of the Compact Muon Solenoid experiment will have to undergo a significant upgrade. In this upgrade the GMT will be reimplemented in a Xilinx Virtex-7 card utilizing the microTCA architecture. The available high-capacity input and output will be used to increase the number of sent and received muon objects. Furthermore their data size will increase from currently 32 bits to 64 bits per object. Additionally the GMT will calculate a muon's isolation using energy information received from the calorimeter trigger. This information is added to the muon objects forwarded to the Global Trigger. It may also be possible to migrate the final sorting stage of each muon subsystem to the GMT using the increased bandwidth and processing power. A summary of the current status of the future GMT's development will be given.
Current system
The current Compact Muon Solenoid (CMS) Level-1 Trigger is based on VME technology utilizing mainly Field-Programmable Gate-Arrays (FPGAs) as well as Application-Specific Integrated Circuits with galvanic links for inter-card communication. It reduces the event-rate from the nominal LHC bunch-crossing frequency of 40 MHz to 100 kHz. In order to do this the Level-1 Trigger is synchronized to the Large Hadron Collider (LHC) clock of 40 MHz and works in a fully pipelined mode.
The general operating principle is to find local features of physics objects in early stages and successively combine these into regional physics objects, whereupon they are sorted in a global stage before being sent to the Global Trigger (GT). The GT can then trigger a read-out decision based on 128 programmable algorithms. These algorithms work on full physics objects such as muons and jets and can include topological conditions. This principle may be found in the muon trigger (see figure 1): hit information from two of the muon systems in CMS (cathode strip chambers (CSC) and drift tubes (DT)) is sent to a local stage where it is combined into track stubs within a single muon station. These track stubs are used to form muon tracks in the track-finder level and subsequently sorted. The four best tracks found in both the CSC and DT systems are then sent to the Global Muon Trigger (GMT). The resistive plate chamber (RPC) system uses a pattern-matching based approach. It then sends the four highest-ranked muons for both the barrel and endcap regions to the GMT.
The GMT matches the muon tracks from complementary muon systems. RPC tracks are merged with matching CSC or DT tracks according to programmable algorithms that find an improved track based on the two original tracks. Due to the geometry of the CMS detector a muon can be reconstructed in both the barrel and the endcap muon systems. The GMT finds such tracks and then cancels the duplicate. The resulting muons are sorted in two stages before the four highest-ranked muons are sent to the GT. The current GMT is described in more detail elsewhere, see ref. [1].
The GT can then combine the muon tracks with information received from the calorimeter trigger in 128 algorithms of which each can trigger a read-out decision.
A complete description of the Level-1 Trigger is provided in the Technical Design Report, see ref. [2].
Motivation for the upgrade
The LHC's expected instantaneous luminosity after Long Shutdown 1 exceeds the original design specification. The number of pile-up interactions already surpassed it in the 2012 run.
Still, the Level-1 Trigger will be required to support a physics program that both allows searches at the TeV scale and is sensitive to electroweak scale physics. This cannot be achieved by increasing the Level-1 accept rate as several detectors would require major upgrades in order to accommodate the required read-out rate.
Especially the pile-up sensitive multi-object triggers will require significant increases in their trigger thresholds. This fact necessitates an improvement in several areas of the Level-1 Trigger. For the muon trigger this mainly means the introduction of muon isolation as well as an improvement in the muon parameter precision, especially for its transverse momentum.
Upgrades for the muon trigger
The track-finders will be upgraded both concerning their hardware as well as their firmware. This should lead to increases in the track-finders' precision for muon track parameters as well as greater flexibility for future improvements. They will also be able to absorb hit information from the RPC system in order to increase the quality of reconstructed tracks at an earlier stage than in the current system. Furthermore an additional track-finder will be introduced to cover the overlap region between the CSC and DT systems. The track-finders will then consist of a barrel, overlap and forward track-finder (see figure 2).
In order to accommodate the track-finders' increased precision, muon objects will be increased in size from the current 32 bits to 64 bits. This will also allow the Level-1 Trigger to move to a linear scale for the muon track parameters as well as remain flexible for possible later changes (see table 1).
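As a rough illustration of how track parameters on a linear scale can be packed into a wider fixed-width word, the sketch below packs a few fields into one integer. The field names and bit widths are invented for the example and do not correspond to the actual CMS muon data format.

```python
# Hypothetical illustration of packing muon track parameters on linear
# scales into a single word. Field names and bit widths are invented for
# the example; the remaining bits of a 64-bit word would stay reserved.
FIELDS = [            # (name, bit width)
    ("pt",      9),   # transverse momentum, linear scale
    ("eta",     9),   # pseudorapidity
    ("phi",    10),   # azimuthal angle
    ("quality", 4),   # track quality from the track-finder
    ("charge",  1),
]

def pack(values):
    """Pack field values (a dict) into one integer word, LSB first."""
    word, shift = 0, 0
    for name, width in FIELDS:
        v = values[name]
        assert 0 <= v < (1 << width), f"{name} out of range"
        word |= v << shift
        shift += width
    return word

def unpack(word):
    """Inverse of pack: recover the field values from the packed word."""
    out, shift = {}, 0
    for name, width in FIELDS:
        out[name] = (word >> shift) & ((1 << width) - 1)
        shift += width
    return out
```

Widening the word from 32 to 64 bits leaves headroom for finer parameter scales and future fields; round-tripping any in-range muon through pack and unpack recovers the original values.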
Finally the number of muons delivered to the GT by the GMT will be increased from 4 to 8. This will allow greater flexibility for the trigger algorithms such as using lower quality muons for b-tagging.
A comprehensive description of the planned upgrades is given elsewhere, see ref. [3].
Hardware
For the Level-1 Trigger upgrade the current VME crates will be replaced by a microTCA crate system. This system provides system-level health management, redundant power supplies and cooling as well as a high-capacity backplane. The GMT will be implemented in a Xilinx Virtex-7 690 FPGA placed on an Advanced Mezzanine Card (AMC). This means a significant increase in resources compared to the currently used chips. However, the current GMT consists of 10 Virtex-II chips while the upgraded system will utilize only one chip. This means an increase in logic resources of only a factor of 3 (see table 2). The Virtex-7 also includes digital-signal processors (DSPs) for fast integer addition and multiplication, as well as 80 transceivers capable of a maximum input and output bandwidth of 13.1 Gb/s each.
The current target card is the Imperial Master Processor, Virtex-7 (MP7, see ref. [4]). This AMC module supplies 72+72 10 Gb/s channels for receiving and sending via optical links. Multi-fiber Termination Push-on (MTP) connectors are used for I/O, one connector bundles 36 fibres.
Planned algorithms for the upgraded GMT
The future GMT will change significantly in its design when compared to the current system, as it will not be necessary to merge muons from the track-finders with those delivered by the RPC system. There will also be two overlapping regions between both barrel and overlap track-finders as well as overlap and forward track-finders instead of the current single overlapping region between CSC and DT track-finders. Finally the future GMT will compute the isolation of a muon based on the energy deposits in the calorimeter around the muon's track.
Muon sorting
With 72 high-bandwidth inputs available on the target card a significant increase in the number of input muons is possible. The GMT could absorb the track-finders' muons directly without using dedicated sorter cards, thereby saving latency otherwise required for de-/serialization at the optical transceivers. As each track-finder consists of 12 processor boards, each with an output of 3 muons, the GMT then would absorb 108 input muons at 64 bits. Apart from the latency savings this will allow to remove ghosts between the track-finders at an earlier stage in the processing chain.
Muon isolation
The GMT will calculate an isolation variable for each muon sent to the GT. This will allow algorithms to be used that ignore muons created in jets from in-flight decay. This value can be either calculated using a fixed threshold for the energy deposited around the muon's track (absolute isolation) or using the ratio between the energy deposited and the muon's transverse momentum (relative isolation). Current studies indicate that at a few percent efficiency cost a rate reduction on the order of 35% can be achieved, assuming an improved p T resolution. This is estimated using p T values obtained by the high-level trigger (HLT) using only muon system information, i.e. an estimate for the best possible performance for the Level-1 muon trigger.
6 Implementation

6.1 Muon sorting

As explained in section 5.1 the future GMT could absorb the current final sorters. In such a scenario muon sorting will be accomplished in two stages (see figure 3). In the first stage the muons from each track-finder will be sorted separately. Muons from the overlap and forward track-finders will be sorted separately for the positive and negative sides of the detector. Sorting will be done according to a rank assigned depending on p T and the quality of a muon as given by the track-finders. In parallel the muons from each track-finder are matched in order to find possible ghosts (see section 6.2). The duplicate muon with the lower quality will then be canceled out in the sorter.
The second sorter stage receives four muons each from the positive and negative regions of both the overlap and forward track-finders, as well as eight muons from the barrel track-finder. The best eight muons out of these 24 are then sent to the GT after being assigned isolation values that are supplied by the isolation unit.
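The two-stage sort can be sketched as follows. The rank function (quality first, then p T) and the example inputs are assumptions for illustration only; in the real system the rank assignment is programmable.

```python
# Sketch of the two-stage muon sort. Muons are (p_T, quality) pairs; the
# rank (quality first, then p_T) is an illustrative choice, since the real
# rank assignment is programmable.
def rank(muon):
    pt, quality = muon
    return (quality, pt)

def stage1(muons, keep):
    """Sort one track-finder region and keep its best `keep` muons."""
    return sorted(muons, key=rank, reverse=True)[:keep]

def stage2(regions, keep=8):
    """Merge the regional winners and keep the best `keep` overall."""
    merged = [m for region in regions for m in region]
    return sorted(merged, key=rank, reverse=True)[:keep]

# Example regional inputs (invented values): eight kept from the barrel,
# four each from the positive/negative overlap and forward regions.
barrel  = stage1([(30, 3), (12, 2), (55, 1), (20, 3)], keep=8)
ovl_pos = stage1([(25, 2), (10, 1)], keep=4)
ovl_neg = stage1([(40, 3)], keep=4)
fwd_pos = stage1([(15, 2), (60, 2)], keep=4)
fwd_neg = stage1([(8, 1)], keep=4)

best8 = stage2([barrel, ovl_pos, ovl_neg, fwd_pos, fwd_neg])
```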
Ghost busting
Ghost busting in the upgraded muon trigger will be necessary between the different track-finders, but also for muons from neighbouring sectors or wedges in the same track-finder. In the current system this cancel-out is either not necessary as no information is shared between neighbouring stations (as for the CSC track-finder) or done already in the barrel sorter in the case of the DT track-finder.
The future GMT thus needs to perform ghost busting between the following areas:

• barrel and positive overlap track-finders
• barrel and negative overlap track-finders
• positive overlap and positive forward track-finders
• negative overlap and negative forward track-finders
• neighbouring wedges or sectors of each track-finder

Tracks can be matched either based on their spatial coordinates or based on common track segments used during the tracks' assembly.
Figure 3. Block diagram of the current version of the upgraded GMT functionality. Latency according to software simulations. The GMT sorts input muons from the positive and negative sides of the detector in the endcap and overlap regions as well as the input muons from the barrel region separately. In parallel ghost-busting takes place as well as calculation of pile-up in the calorimeter and an extrapolation for each muon to the vertex. In a second sorter stage muons from all detector regions are sorted again and the eight best are sent to the isolation unit in order to determine their isolation. Finally these eight muons and their isolation values are sent to the GT.
Coordinate-based cancel out. Matching based on spatial coordinates uses a matching window ΔR² = Δη² + Δφ². A match quality value is assigned to each pair of muons depending on ΔR². Additional coefficients can be introduced for both Δη and Δφ. This type of matching requires no additional information from the track-finders; however, it is less accurate than the track address-based cancel out described below.
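A minimal sketch of this coordinate-based matching, assuming an illustrative window size and unit coefficients for Δη and Δφ:

```python
import math

# Sketch of coordinate-based ghost busting. The matching window and the
# eta/phi coefficients are illustrative, not tuned values.
def delta_phi(phi1, phi2):
    """Wrap the azimuthal difference into [-pi, pi)."""
    return (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi

def is_ghost_pair(trk1, trk2, window=0.01, c_eta=1.0, c_phi=1.0):
    """Flag two (eta, phi) tracks as duplicates if dR^2 is inside the window."""
    (eta1, phi1), (eta2, phi2) = trk1, trk2
    dr2 = (c_eta * (eta1 - eta2)) ** 2 + (c_phi * delta_phi(phi1, phi2)) ** 2
    return dr2 < window
```

Note the φ difference is wrapped so that tracks on either side of the ±π boundary still match.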
Track address-based cancel out. Ghost busting using the track addresses works by matching the track segments used for a muon's track in each station. If a certain number of shared track segments were used for two tracks, they are flagged as duplicates of each other. This is more accurate than matching based on spatial coordinates and is currently used in the DT barrel sorter. However, significantly more information is required to perform this kind of ghost busting.
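The following sketch represents each track as a set of (station, segment address) pairs; the duplicate threshold of one shared segment is an illustrative choice, not the actual firmware setting.

```python
# Sketch of track address-based ghost busting. A track is represented as a
# set of (station, segment address) pairs; the threshold of one shared
# segment is an illustrative choice.
def shared_segments(track_a, track_b):
    return len(set(track_a) & set(track_b))

def is_duplicate(track_a, track_b, min_shared=1):
    return shared_segments(track_a, track_b) >= min_shared

trk1 = {(1, 17), (2, 40), (3, 8), (4, 3)}
trk2 = {(1, 17), (2, 41), (3, 8)}   # shares the segments in stations 1 and 3
trk3 = {(1, 5), (2, 12)}            # no segments in common with trk1
```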
Muon isolation
The upgraded GMT can receive 5-bit energy values for 2×2 trigger tower regions from the Layer-2 calorimeter trigger with the available bandwidth. An isolation algorithm similar to the one currently being studied (see section 5.2) has been written and synthesised successfully. The algorithm pre-calculates 5×1 sums of the 2×2 regions. In parallel muon tracks are extrapolated to the vertex. The final muons selected by the last sorter stage are then used to select the 5×1 sums to be used for 5×5 sums around the muon tracks. Finally the isolation value is determined based on the calculated 5×5 sums as well as the muon's p T in the case of relative isolation.
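A simplified sketch of the 5×5 energy sum and the isolation decision. Each grid cell here holds one energy value per 2×2-tower region; the sum clips at the η edges, wraps around in φ, and the thresholds are illustrative assumptions, not tuned values.

```python
# Simplified sketch of the isolation unit. Each grid cell holds the energy
# of one 2x2 trigger-tower region; the sum clips at the eta edges and wraps
# around in phi. Thresholds are illustrative.
def energy_5x5(grid, ieta, iphi):
    n_eta, n_phi = len(grid), len(grid[0])
    total = 0
    for de in range(-2, 3):
        e = ieta + de
        if not 0 <= e < n_eta:          # clip at the eta edges
            continue
        for dp in range(-2, 3):
            total += grid[e][(iphi + dp) % n_phi]   # phi wraps around
    return total

def is_isolated(energy, pt, abs_thr=10, rel_thr=0.5, relative=True):
    """Relative isolation compares energy/p_T; absolute uses a fixed cut."""
    return (energy / pt < rel_thr) if relative else (energy < abs_thr)
```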
Pile-up subtraction will be performed in the Calorimeter trigger before sending the energy sums to the GMT. However it may be beneficial to provide this functionality in the GMT if an improved algorithm can be found for pile-up removal when isolating muons.
Interface
The trigger systems will communicate via optical links.
GT. The GMT will send eight muons at 64 bits to the GT. One 10 Gb/s link will transfer two muons.
Track-finders. Each track-finder sends 36 muons at 64 bits. One 10 Gb/s link per track-finder processor will transfer three muons to the GMT.
Calorimeter trigger. The calorimeter trigger will send 5-bit energy values for each 2×2 trigger tower region to the GMT. Due to the time-multiplexed nature (see ref. [5]) of the calorimeter trigger there are 3 links possible per Layer-2 processor board. This means that the energy values will be transmitted over a period of 10 BX.
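A quick arithmetic check that the quoted track-finder links have enough bandwidth: three 64-bit muons per 40 MHz bunch crossing on one 10 Gb/s link, with serialization and protocol overheads ignored here.

```python
# Back-of-the-envelope check that one 10 Gb/s link can carry three 64-bit
# muons per bunch crossing (serialization/protocol overheads ignored).
BX_RATE_HZ = 40e6          # LHC bunch-crossing frequency
MUON_BITS = 64
MUONS_PER_LINK = 3
LINK_CAPACITY = 10e9       # bits per second

payload = MUONS_PER_LINK * MUON_BITS * BX_RATE_HZ   # = 7.68 Gb/s
assert payload < LINK_CAPACITY
```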
Summary
An overview of the current developments for the upgraded CMS Global Muon Trigger has been given. Due to the significantly increased luminosity that will be provided by the LHC during Run 2 an upgrade of the CMS Level-1 Trigger is necessary. In the context of this upgrade the GMT will be reimplemented in a Virtex-7 FPGA using the microTCA crate technology. The increased input and processing capabilities will be used to provide muon isolation as well as increase both the number and precision of muon objects. Muon sorting in the GMT could potentially save latency as well as allow improved ghost busting. | 2022-06-28T04:47:37.286Z | 2013-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "fec2cd46ac14a3f7d926ef740365607cd6b8dc8c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1748-0221/8/12/c12017",
"oa_status": "HYBRID",
"pdf_src": "IOP",
"pdf_hash": "fec2cd46ac14a3f7d926ef740365607cd6b8dc8c",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
237581134 | pes2o/s2orc | v3-fos-license | Survey: Transformer based Video-Language Pre-training
Inspired by the success of transformer-based pre-training methods on natural language tasks and further computer vision tasks, researchers have begun to apply transformer to video processing. This survey aims to give a comprehensive overview of transformer-based pre-training methods for Video-Language learning. We first briefly introduce the transformer structure as the background knowledge, including the attention mechanism, position encoding, etc. We then describe the typical paradigm of pre-training & fine-tuning on Video-Language processing in terms of proxy tasks, downstream tasks and commonly used video datasets. Next, we categorize transformer models into Single-Stream and Multi-Stream structures, highlight their innovations and compare their performances. Finally, we analyze and discuss the current challenges and possible future research directions for Video-Language pre-training.
Introduction
Transformer networks (Vaswani et al. 2017) have shown great advantages in performance and have become popular in Deep Learning (DL). Compared to traditional deep learning networks such as Multi-Layer Perceptrons (MLP), Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), transformer is more suitable for pre-training & fine-tuning, because its network structure is easy to deepen and it has a smaller model bias. The typical pre-training & fine-tuning paradigm is that the model is first trained on a large amount of (typically self-supervised) training data and then fine-tuned on smaller (typically task-specific) datasets for the downstream tasks. The pre-training stage helps the model to learn universal representations, which benefits downstream tasks.
Transformer-based pre-training methods were first proposed for Natural Language Processing (NLP) tasks and achieved remarkable performance gains. For example, Vaswani et al. (Vaswani et al. 2017) first propose the transformer structure with the self-attention mechanism for machine translation and English constituency parsing tasks. BERT - Bidirectional Encoder Representations (Devlin et al. 2018) can be considered a milestone in NLP, which adopts the transformer network for pre-training on unlabeled text corpora and achieves state-of-the-art performance on 11 downstream tasks. GPT - Generative Pre-trained Transformer v1-3 (Radford and Narasimhan 2018; Radford et al. 2019; Brown et al. 2020) are designed as general language models with extended parameters and trained on extended training data, among which GPT-3 is trained on 45TB of compressed plain text data with 175 billion parameters. Inspired by the breakthrough of transformer-based pre-training methods in the NLP field, researchers in computer vision (CV) have also applied transformers in various tasks in recent years. For example, DETR (Carion et al. 2020) applies transformer to end-to-end object detection.

Video analysis and understanding is more challenging, because video naturally carries multi-modal information. For the representative Video-Language tasks such as video captioning (Das et al. 2013) and video retrieval (Xu et al. 2016), existing methods have mainly focused on learning video's semantic representation based on the video frame sequence and corresponding captions. In this paper, we focus on providing a comprehensive overview of the recent advances in transformer-based pre-training methods for Video-Language processing, including commonly used metrics of corresponding benchmarks, taxonomy of existing model designs, and some further discussion. We hope to track the progress of this area and provide an introductory summary of related works for peer researchers, especially beginners.
The remainder of this paper is organized as follows: Section 2 introduces the related fundamental concepts, including the standard transformer with the self-attention mechanism, the paradigm of the pre-training & fine-tuning approach, and commonly used datasets. Section 3 presents the major existing methods according to their model structures and highlights their strengths and weaknesses as well. Section 4 further discusses several research directions and challenges, and Section 5 concludes the survey.
Transformer
Transformer (Vaswani et al. 2017) was first proposed in the field of Natural Language Processing (NLP) and showed great performance on various tasks (Wang et al. 2018; Rajpurkar et al. 2016; Zellers et al. 2018). It has been successfully applied in other fields ever since, from language (Devlin et al. 2018; W. Rae et al. 2020) to vision (Dosovitskiy et al. 2021).
As illustrated in Fig. 1, the standard transformer consists of several encoder blocks and decoder blocks. Each encoder block contains a self-attention layer and a feed forward layer, while each decoder block contains an encoder-decoder attention layer in addition to the self-attention and feed forward layers.

Figure 1: An overview of the standard transformer architecture. The whole transformer is composed of an encoder module and a decoder module, with several encoders and decoders stacked in each module respectively. Each encoder consists of a multi-head attention layer and a feed forward layer, while each decoder additionally contains an encoder-decoder attention layer. The multi-head attention mechanism is shown in the rightmost column, which transfers the input sequence into h groups of {K, Q, V} and concatenates the self-attention outputs of each group as the final output.
Self-Attention
Self-attention is one of the core mechanisms of the transformer, which exists in both encoder and decoder blocks. Taking a sequence of entity tokens X = {x_0, x_1, ..., x_n} as input (the entity tokens can be a word sequence in NLP or video clips in the vision area), the self-attention layer first linearly transforms the input tokens into three different vectors: the key vector K ∈ ℝ^{n×d_k}, the query vector Q ∈ ℝ^{n×d_q} and the value vector V ∈ ℝ^{n×d_v} (e.g. d_k = d_q = d_v = 512 in practice). The output is produced via

Att(X) = softmax( (Q · K^T) / √d_k ) × V,

where Q · K^T captures the relevance score between different entities, √d_k reduces the score for gradient stability, the softmax operation normalizes the result into a probability distribution and finally, multiplying with V obtains the weighted value matrix.
In the decoder block, the encoder-decoder attention is similar to self-attention, except that the key vector and the value vector come from the encoder module while the query vector comes from the output of the previous decoder block.
Note that not all self-attention layers attend to all entities. In the training stage of BERT (Devlin et al. 2018), 15% of the input tokens are randomly masked for prediction and the masked entities should not be attended to. When using BERT to output the next word token in the downstream task of sentence generation, the self-attention layer of the decoder block only attends to the previously generated entities. Such attention can be realized by a mask M ∈ ℝ^{n×n}, where the corresponding masked positions of M are set to zero. The formula of masked self-attention can be adjusted from the original self-attention to

MaskedAtt(X) = softmax( (Q · K^T) / √d_k ⊙ M ) × V.
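To make the mechanism concrete, the following is a minimal NumPy sketch of (masked) scaled dot-product attention. The function and weight names are illustrative, not taken from any surveyed codebase, and the mask is applied by setting masked scores to a large negative value before the softmax (the common implementation trick, equivalent in effect to the mask M described above):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_attention(X, Wq, Wk, Wv, mask=None):
    """Scaled dot-product attention with an optional (n, n) 0/1 mask.
    Masked (0) positions receive a large negative score so their
    softmax weight is effectively zero."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    if mask is not None:
        scores = np.where(mask == 0, -1e9, scores)
    return softmax(scores) @ V
```

With a lower-triangular mask, token i only attends to tokens 0..i, which is exactly the causal pattern used for left-to-right generation.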
Multi-Head Attention
The multi-head attention mechanism (Vaswani et al. 2017) has been proposed to model the complex relationships of token entities from different aspects. To be specific, the input sequence is linearly transformed into h groups of {Q_i, K_i, V_i}_{i=0}^{h-1}, and each group repeats the self-attention process. The final output is produced by projecting the concatenation of the outputs from the h groups with a weight matrix W^O ∈ ℝ^{h·d_v × d_model}. The overall process can be described as:

MultiHead(X) = Concat(head_0, ..., head_{h-1}) W^O, where head_i = Att(Q_i, K_i, V_i).
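The per-head splitting and final projection can be sketched in NumPy as follows (weight names and the slicing scheme are our own illustration; h must divide the model dimension d):

```python
import numpy as np

def multi_head_attention(X, Wq, Wk, Wv, Wo, h):
    """Slice the joint Q/K/V projections into h heads, run scaled
    dot-product attention per head, then project the concatenated
    head outputs with Wo."""
    n, d = X.shape
    dh = d // h
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for i in range(h):
        q, k, v = (M[:, i * dh:(i + 1) * dh] for M in (Q, K, V))
        scores = q @ k.T / np.sqrt(dh)
        e = np.exp(scores - scores.max(axis=-1, keepdims=True))
        attn = e / e.sum(axis=-1, keepdims=True)
        heads.append(attn @ v)
    return np.concatenate(heads, axis=-1) @ Wo
```

Each head sees only a d/h-dimensional slice, so the total cost is comparable to a single full-dimensional attention while capturing h different relation patterns.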
Position Encoding
Different from CNNs (Lecun et al. 1998) or RNNs (Chung et al. 2014), self-attention lacks the ability to capture the order information of the sequence. To address this problem, position encoding (Vaswani et al. 2017) is added to the input embedding in both the encoder and decoder blocks. The position encoding of tokens is constructed as follows:

PE(pos, 2i) = sin( pos / 10000^{2i/d_model} ),
PE(pos, 2i+1) = cos( pos / 10000^{2i/d_model} ),

where pos refers to the token's position and i refers to the dimension. Another commonly used way to introduce position information is a learned position embedding (Gehring et al. 2017). Experiments in (Vaswani et al. 2017) show that these two position encoding methods achieve similar performance.
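The sinusoidal encoding above can be computed directly (a minimal sketch assuming an even d_model; the function name is ours):

```python
import numpy as np

def positional_encoding(n_pos, d_model):
    """Sinusoidal position encoding:
    PE(pos, 2i)   = sin(pos / 10000^(2i/d_model)),
    PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))."""
    pos = np.arange(n_pos)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((n_pos, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe
```

Because each dimension is a sinusoid of a different wavelength, relative offsets between positions correspond to fixed linear transformations of the encoding, which is one motivation given for this choice.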
Transformer Structure
The original Transformer (Vaswani et al. 2017) follows the encoder-decoder structure with stacks of 6 encoder blocks and 6 decoder blocks respectively. The encoder block consists of a multi-head self-attention sub-layer and a position-wise feed-forward sub-layer, where the position-wise feed-forward sub-layer contains two linear transformations with a ReLU activation. The decoder block additionally inserts a third sub-layer of encoder-decoder attention. What's more, residual connection and layer normalization are added to each sub-layer for further performance promotion. All sub-layers in the model, as well as the embedding layers, produce outputs of dimension d_model = 512, and the dimension of the hidden layer is d_ff = 2048.
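A single encoder block with residual connections and layer normalization can be sketched as below. The attention sub-layer is passed in as a callable for brevity, and all names and shapes are illustrative (in the original setting W1 would be 512×2048 and W2 2048×512):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def encoder_block(X, attn, W1, b1, W2, b2):
    """Post-LN encoder block: attention sub-layer, then a
    position-wise FFN with ReLU, each wrapped in a residual
    connection followed by layer normalization."""
    X = layer_norm(X + attn(X))
    ffn = np.maximum(0.0, X @ W1 + b1) @ W2 + b2
    return layer_norm(X + ffn)
```

Because the block maps (n, d_model) to (n, d_model), it can be stacked to arbitrary depth, which is the property that makes the architecture easy to deepen for pre-training.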
Compared with CNNs and RNNs, the major advantages of the transformer are its ability to capture global information and its support for parallel computation. Furthermore, the concise and stackable architecture of the transformer enables training on larger datasets, which promotes the development of the pre-training & fine-tuning self-supervised learning paradigm.
Pre-training & Fine-tuning
Pre-training & Fine-tuning has become a typical learning paradigm for transformer based models: first pre-training the model on a large-scale dataset in a supervised or unsupervised way and then adapting the pre-trained model to smaller datasets for specific downstream tasks via fine-tuning. Such a paradigm avoids training new models from scratch for different tasks or datasets. It has been shown that pre-training on larger datasets helps learn universal representations, which improves the performance of downstream tasks. For example, the NLP Transformer model GPT (Radford and Narasimhan 2018) gains an average 10% absolute improvement on 9 downstream benchmark datasets (e.g. CoLA (Warstadt et al. 2018), MRPC (Dolan and Brockett 2005)) after pre-training on the BooksCorpus dataset (Zhu et al. 2015) with 7000 unpublished books. The Vision Transformer model ViT-L/32 (Dosovitskiy et al. 2021) gains 13% absolute accuracy improvement on the test set of ImageNet (Deng et al. 2009) after pre-training on JFT-300M (Sun et al. 2017) with 300 million images.
Owing to the successful application of pre-trained models in NLP and CV tasks, more and more research explores cross-modal tasks, including Vision-Language and Video-Language. The main difference between Vision-Language tasks and Video-Language tasks is that the former focuses on the image and text modalities, such as language based image retrieval and image captioning (Vinyals et al. 2015), while the latter focuses on the video and text modalities, which adds the temporal dimension over the image modality.
In the following subsections, we describe the Pre-training & Fine-tuning methods in the Video-Language field, including the commonly used proxy tasks and video-language downstream tasks.
Proxy Tasks
Proxy tasks are crucial for the final performance of pre-trained models, as they directly determine the models' learning objectives. We classify the proxy tasks into three categories: completion tasks, matching tasks and ordering tasks. 1) Completion tasks aim to reconstruct the masked tokens of the input, which endows the model with the ability to build intra-modal or inter-modal relationships. Typical tasks include Masked Language Modeling (MLM), Masked Frame Modeling (MFM), Masked Token Modeling (MTM), Masked Modal Modeling (MMM) and Language Reconstruction (LR). We will describe them in detail in the following section. 2) Matching tasks are designed to learn the alignment between different modalities, originating from the Next Sentence Prediction (NSP) task of BERT (Devlin et al. 2018). For example, Video Language Matching (VLM) is the classical matching task, which aims at matching the video and text modalities. Some researchers also introduce the audio modality for further matching objectives (Akbari et al. 2021). 3) Ordering tasks shuffle the sequence at the input side and force the model to recognize the original sequence order. For example, Frame Ordering Modeling (FOM) is specifically designed to exploit the temporal nature of video sequences and Sentence Ordering Modeling (SOM) is designed for the text modality.
Among all commonly used proxy tasks, Self-Supervised Learning (SSL) is the dominant strategy adopted to fit the situation that pre-training requires massive training data. SSL is one type of Unsupervised Learning (USL) that automatically generates labeled data by itself, which drives the model to learn the inherent co-occurrence relationships of the data. For example, in a sentence completion task such as "I like ___ books", a well-trained language model should fill in the blank with the word "reading". In Video-Language pre-training, Masked Language Modeling (MLM) and Masked Frame Modeling (MFM) are two widely used SSL proxy tasks.
Contrastive Learning (CL) has recently become an important component in self-supervised learning for Video-Language pre-training. Different from reconstructing masked tokens by minimizing an L2 distance, it embeds the same samples close to each other while pushing away the embeddings of different samples. An extensive survey of CL can be found in (Jaiswal et al. 2020).
In the remainder of this section, we introduce some widely used proxy tasks (as summarized in Tab. 1) during Video-Language pre-training. In the following formulas, we use the general notations w, v, u for the word sequence, the video sequence, and the union of the tokens of w and v; w_m, v_m, u_m refer to the corresponding masked tokens.
Masked Language Modeling (MLM) was first referred to as a cloze task in (WL 1953) and then adapted as a proxy task during the pre-training of BERT (Devlin et al. 2018). The original MLM randomly masks out a fixed percentage of words from the input sentence, and then predicts the masked words based on the other word tokens. MLM used in Video-Language pre-training not only learns the inherent co-occurrence relationships of the sentence but also combines the visual information with the sentence. For example, as elaborated in ActBERT (Zhu and Yang 2020), when a verb is masked out, the task forces the model to extract relevant action features for more accurate prediction. When a noun or a description of a noun is masked out, visual features of the related object can provide contextual information. Empirically, the masking percentage is always set to 15%. The loss function of MLM can be defined as:

ℒ_MLM(θ) = −E_{(w,v)∼D} log P_θ(w_m | w_{∖m}, v),

where w_{∖m} denotes the unmasked word tokens.
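The input-corruption step of MLM can be sketched as follows (a minimal illustration of 15% random masking; `MASK_ID` and the function name are hypothetical, and BERT's additional random-replacement/keep variants are omitted):

```python
import numpy as np

MASK_ID = 0  # hypothetical id reserved for the [MASK] token

def mask_tokens(token_ids, ratio=0.15, rng=None):
    """Randomly replace `ratio` of the tokens with [MASK] and return
    the corrupted sequence plus the positions the model must predict."""
    rng = rng or np.random.default_rng()
    n = len(token_ids)
    k = max(1, int(round(n * ratio)))
    idx = rng.choice(n, size=k, replace=False)
    corrupted = np.array(token_ids)
    corrupted[idx] = MASK_ID
    return corrupted, np.sort(idx)
```

The model is then trained to recover the original ids at the returned positions, conditioned on the surviving tokens (and, in the Video-Language setting, on the video tokens).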
Masked Frame Modeling (MFM) is similar to MLM in that it simply changes the sentence to the video sequence. That is, the frame tokens are masked for prediction according to the contextual frames, with the input text providing semantic constraints. However, since a video is continuous, with no fixed vocabulary as text has, researchers make different adjustments on the input side or the loss objective side for the MFM task. We categorize MFM into three sub-tasks according to their loss functions: 1) MFM with Cross Entropy (MFMCE), 2) MFM with Regression (MFMR), and 3) MFM with Contrastive Learning (MFMCL).
Typical examples of MFMCE can be found in VideoBERT (Sun et al. 2019) and ActBERT (Zhu and Yang 2020). VideoBERT splits the continuous videos into clip tokens and clusters the clip tokens into a fixed-size dictionary by hierarchical k-means. In this way, the masked video feature can be predicted as a video word with a class likelihood. ActBERT extracts the action concept and local object features from the video, and the model is forced to predict the action category and object category of the masked video tokens respectively. The loss function of MFMCE can be defined as:

ℒ_MFMCE(θ) = −E_{(w,v)∼D} log P_θ(c(v_m) | v_{∖m}, w),

where c(v_m) is the class label of the masked video token. A typical example of MFMR can be found in HERO, which learns to regress the output on each masked frame to its visual features. HERO uses an L2 regression between the input video feature v_m and the output video feature h(v_m):

ℒ_MFMR(θ) = E_{(w,v)∼D} ‖h(v_m) − v_m‖_2^2.

However, it is hard to reconstruct the original video feature with regression, as a video contains rich information. MFMCL instead adapts Contrastive Learning to maximize the Mutual Information (MI) between the masked video tokens and the original video tokens:

ℒ_MFMCL(θ) = −E_{(w,v)∼D} log NCE(v_m | v_{∖m}, w).
Masked Token Modeling (MTM) unifies MLM and MFM in one loss function. It is proposed by Xu et al. (Xu et al. 2021) and the formula is defined as:

ℒ_MTM(θ) = −E_{u∼D} log NCE(u_m | u_{∖m}),

where the masked token u_m (a video or text token) is predicted against negative candidates from both modalities. Compared with MLM and MFM, MTM learns joint token embeddings for both video and text tokens. Furthermore, it also expands the contrasted negative samples used in the two separate losses for MFM and MLM.

Masked Modal Modeling (MMM) is first used as part of the pre-training strategy and later formally proposed by VLM (Xu et al. 2021). It masks either all video tokens or all text tokens, which encourages the model to use tokens from one modality to recover tokens from the other modality. The objective function employs NCE as in MTM, and experiments in VLM (Xu et al. 2021) have proved its effectiveness, especially for text-based video retrieval (Xu et al. 2016).
Language Reconstruction (LR) is a generative task, which aims to endow the pre-trained model with the ability of video caption generation. The difference between LR and the masked language methods (MLM, and MMM with all text tokens being masked) is that LR generates the sentence from left to right, which means the model only attends to the previous text tokens and the video tokens when predicting the next text token. The loss function is:

ℒ_LR(θ) = −E_{(w,v)∼D} Σ_t log P_θ(w′_t | w′_{<t}, w_m, v),

where w′ is the ground truth of the word sequence and w_m is the masked version.
Video Language Matching (VLM) aims to learn the alignment between video and language. There are different task forms of VLM, and we classify them into 1) Global Video Language Matching (GVLM) and 2) Local Video Language Matching (LVLM). For GVLM, one objective function is adapted from the Next Sentence Prediction (NSP) task used by BERT (Devlin et al. 2018), which feeds the hidden state of the special token [cls] into an FC layer for binary classification. The objective function is:

ℒ_GVLM(θ) = −E_{(w,v)∼D} [ y log s_θ(v, w) + (1 − y) log(1 − s_θ(v, w)) ],

where y = 1 if v and w are matched and s_θ(v, w) is the predicted matching score. Another GVLM variant matches the sequence embeddings of the two modalities. Specifically, it transfers the two embedding sequences of video and language into two single features by mean pooling or a linear transform, then forces the paired samples closer while pushing away different ones by MIL-NCE (Miech et al. 2020) or other functions. This objective is usually used in pre-training models with a multi-stream structure, which does not contain the special token [cls] for direct matching prediction. An example objective function is:

ℒ = −E log [ exp(v̂⊤ŵ) / ( exp(v̂⊤ŵ) + Σ_{(v′,w′)∈N} exp(v̂′⊤ŵ′) ) ],

where v̂ and ŵ are the mean poolings of the video sequence and text sequence respectively, and the negative pairs (v′, w′) take negative video clips or captions within the batch B after fixing w or v.
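The batch-negative matching idea can be sketched with a symmetric InfoNCE-style loss over mean-pooled embeddings. This is a simplified stand-in for MIL-NCE and similar objectives, not the exact loss of any surveyed model; all names and the temperature value are illustrative:

```python
import numpy as np

def log_softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def vlm_nce_loss(v, w, tau=0.07):
    """Symmetric batch-contrastive matching loss over mean-pooled
    video embeddings v and text embeddings w, both (B, d). Matched
    pairs share a row index; other rows serve as negatives."""
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    w = w / np.linalg.norm(w, axis=1, keepdims=True)
    logits = v @ w.T / tau                      # (B, B) similarity matrix
    loss_v2t = -np.diag(log_softmax(logits, axis=1)).mean()
    loss_t2v = -np.diag(log_softmax(logits, axis=0)).mean()
    return (loss_v2t + loss_t2v) / 2
```

Minimizing this loss pulls each video embedding toward its paired caption embedding and away from the other captions in the batch, which is exactly the alignment behavior GVLM aims for.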
Another form of VLM aims to align video and language locally; thus we abbreviate it as LVLM (Local Video Language Matching). It is first proposed in HERO, which matches video and language at the frame level. That is, it computes a query-video matching score by dot product: s = V q ∈ ℝ^T, where q is the query obtained from the language sequence and V is the sequence of frame embeddings. Two trainable 1D CNNs followed by a softmax operation are applied to the matching scores to get two probability vectors p_st, p_ed, which represent the probability of every position being the start and the end of the ground-truth span. The objective function uses the cross-entropy loss and can be summarized as:

ℒ_LVLM(θ) = −E [ log p_st(y_st) + log p_ed(y_ed) ],

where y_st and y_ed are the ground-truth start and end positions.
Sentence Ordering Modeling (SOM) is first proposed in VICTOR (Lei et al. 2021a), which aims to learn the relationships of text tokens from a sequential perspective. Specifically, 15% of the sentences are selected, randomly split into 3 segments and shuffled by a randomly permuted order. Therefore, it can be modeled as a 3!-class classification problem. To be specific, after multi-modal fusion, the embedding of the special token [cls] is input into an FC layer followed by a softmax operation for classification. The overall objective function is:

ℒ_SOM(θ) = −E_{(w,v)∼D} log P_θ(y | w_s, v),

where y is the ground truth of the segment order and w_s is the shuffled word sequence.
Frame Ordering Modeling (FOM) is proposed in VICTOR (Lei et al. 2021a) and HERO. The core idea is to randomly shuffle a fixed percentage of frames and predict their original order. VICTOR (Lei et al. 2021a) randomly selects 15% of the frames to shuffle. The embedding of each shuffled frame is transformed through an FC layer, followed by a softmax operation for N-class classification, where N is the maximum length of the frame sequence. HERO also randomly selects 15% of the frames to be shuffled. The embeddings of all frames are transformed through an FC layer, followed by a softmax operation to produce a probability matrix P ∈ ℝ^{N×N}, where P_{i,j} represents the score of the i-th frame belonging to the j-th time stamp. The two types of FOM can be summarized into one objective function:

ℒ_FOM(θ) = −E_{(w,v)∼D} log P_θ(y | w, v_s),

where y is the ground truth of the frame order and v_s is the shuffled frame sequence.
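The input construction for FOM can be sketched as follows: select 15% of the positions, permute the frames among them, and record each shuffled position's original time stamp as the classification target. This is our own illustration of the setup, not VICTOR's or HERO's actual code:

```python
import numpy as np

def shuffle_frames(frames, ratio=0.15, rng=None):
    """Shuffle `ratio` of the frame positions. The target for each
    shuffled position is the original time stamp of the frame that
    now sits there."""
    rng = rng or np.random.default_rng()
    n = len(frames)
    k = max(2, int(round(n * ratio)))
    idx = rng.choice(n, size=k, replace=False)   # positions to shuffle
    perm = rng.permutation(idx)                  # destination of each frame
    shuffled = list(frames)
    for src, dst in zip(idx, perm):
        shuffled[dst] = frames[src]
    # targets[dst] = original time stamp of the frame placed at dst
    targets = {int(dst): int(src) for src, dst in zip(idx, perm)}
    return shuffled, targets
```

The model then classifies each shuffled frame's embedding over the N possible time stamps, recovering the `targets` mapping.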
Video-Language Downstream Tasks
The target of pre-training is to better adapt the learned knowledge from a large corpus to downstream tasks via transfer learning (Belinkov et al. 2017). Representative downstream tasks also play the role of evaluating pre-trained models. For better transfer impact, we need to consider the model structure and choose an appropriate transferring method for each downstream task. The common downstream tasks that appear in Video-Language pre-training include generative tasks and classification tasks. We introduce the task requirements and how to transfer the knowledge from pre-training to downstream tasks in the following subsections.

Text-Based Video Retrieval (Xu et al. 2016) is defined to retrieve a relevant video/video segment given an input text query. It requires the model to map the video and text modalities into a common semantic embedding space. Since the proxy task of VLM aims at learning the alignment between video and text, many works (Zhu and Yang 2020; Li et al. 2020; Luo et al. 2020; Lei et al. 2021b) adapt the proxy task of VLM to calculate the matching score of these two modalities directly.

Action Recognition is defined to classify the action category of the given video/video segment, which is a representative classification task for video understanding. To transfer pre-trained knowledge to action recognition, works in (Sun et al. 2020; Lei et al. 2021a) use the pre-trained models as feature extractors and finetune a linear classifier added on top of the pre-trained model.

Action Segmentation (Ding and Xu 2018) is designed to predict the action label of a given video/video segment at the frame level. It is also a classification task with video as the only input. To apply pre-trained models to action segmentation, several works (Zhu and Yang 2020; Xu et al. 2021) use the pre-trained models as feature extractors and add a linear classifier upon the extracted video features.

Action Step Localization is first proposed in CrossTask, which aims to recognize action steps in instructional videos. The difference between action step localization and action recognition is that for step localization, the event is described with a manual phrase rather than drawn from a fixed label dictionary. To apply pre-trained models to action step localization, works in (Zhu and Yang 2020; Luo et al. 2020; Xu et al. 2021) regard the manual phrase as a text description and calculate its relevance score with the input video/video segment by either dot product or linearly transforming the embedding of [cls].

Video Question Answering (VideoQA) (Tapaswi et al. 2016; Lei et al. 2018; Jang et al. 2017) aims to automatically answer natural language questions given a context video. VideoQA applied in Video-Language pre-training can be divided into multiple-choice tasks or fill-in-the-blank tasks according to the types of the answers, both of which can be handled as classification tasks. For multiple-choice VideoQA, works in (Zhu and Yang 2020; Li et al. 2020) feed the candidate answer at the end of the query sentence to generate QA-aware global representations, and input the global representations into an MLP based classifier to obtain the matching score. The final choice is made by selecting the candidate with the maximum matching score. For fill-in-the-blank VideoQA, ActBERT (Zhu and Yang 2020) proposes a similar method, which adds a linear classifier upon the cross-modal feature but without the input of candidate text.
Video Captioning (Chen et al. 2019; Zhou et al. 2018b) is the task of generating a natural-language utterance for the given video, which is the only generative task among the downstream tasks introduced in this paper. It is one of the most typical tasks for multi-modal understanding, and nearly all works related to Video-Language pre-training evaluate their pre-trained models on this task. To transfer pre-trained knowledge to video captioning, works in (Sun et al. 2019; Zhu and Yang 2020; Li et al. 2020) use pre-trained models as the video feature extractor or video encoder and add a transformer-based decoder for finetuning. The work in (Xu et al. 2021) transfers a single encoder to generate the word sequence by reusing the pre-trained model as prediction heads. Another line of work includes a generative task in the pre-training stage by adding a transformer decoder, which reduces the gap between the proxy task and the video captioning task. As the above introduction shows, Video-Language pre-training works focus more on classification tasks; improving the pre-trained model's generation ability can be further explored. What's more, in addition to the downstream tasks listed above, other downstream tasks such as multi-modal sentiment analysis (Zadeh et al. 2017) and image-based retrieval (Wang et al. 2017) have also been explored recently.
Video-Language Datasets
Compared with CNNs, transformer based frameworks rely heavily on massive datasets, especially for pre-training. The quality and quantity of video datasets matter a lot for a model's performance. In this section, we divide the commonly used video datasets into 3 categories according to the types of their annotations: label based datasets, caption based datasets and other datasets. Tab. 2 summarizes all mentioned datasets.
Label Based Datasets
Label based datasets are datasets with labels at the video level. They are widely used for classification tasks such as action recognition. For example, HMDB51 (Kuehne et al. 2011) contains 6,841 videos from 51 action categories in total. UCF101 (Soomro et al. 2012), MPII Cooking (Rohrbach et al. 2012), the Kinetics series (Kay et al. 2017) and AVA (Gu et al. 2018) are other representative datasets.
Other Datasets
In addition to caption and label annotations, other types of annotations are used for other downstream tasks. As shown in Tab. 2, TVQA (Lei et al. 2018) is a VideoQA dataset based on 6 popular TV shows, with 460 hours of videos and 152.5K human-written QA pairs in total. Each query provides 5 candidates with one correct answer, and the correct answer is also marked with start and end time stamps for further inference. COIN (Tang et al. 2019) is designed for COmprehensive INstructional video analysis, which is organized with a 3-level hierarchical structure, from domain and task to step. The dataset contains 11,827 instructional videos in total with 12 domains, 180 tasks, and 778 pre-defined steps. As all the videos are annotated with a series of step descriptions and the corresponding temporal boundaries, COIN is commonly used for the action segmentation task. CrossTask contains 4.7k instructional videos crawled from YouTube, related to 83 tasks. For each task, an ordered list of steps with short descriptions is provided. Works in (Zhu and Yang 2020; Luo et al. 2020) evaluate their pre-trained models on the task of Action Step Localization based on this dataset.
Video-Language Transformer Models
In this section, we provide an overview of transformer based models for Video-Language pre-training in Fig. 2. We roughly divide the models into two categories based on their model structure: Single-Stream Transformers and Multi-Stream Transformers. For the Single-Stream Transformers, the features/embeddings of different modalities are input into a single transformer to capture their intra- and inter-modality information. Multi-Stream Transformers input each modality into independent transformers to capture the information within modalities and then build the cross-modal relationship via, for example, another transformer. In addition to the model structure, the distinctions across different methods relate to their inputs, proxy tasks, downstream tasks and benchmarks, which we summarize in Tab. 3 and describe in detail below.

Figure 2: Despite the differences in model structure, most models take caption tokens and video tokens as inputs, while DeCEMBERT takes ASR captions as additional text information, ActBERT takes object regions as additional visual information and VATT takes audio as an additional modality. As for modality encoders, most models apply modality encoders to extract modality features while VATT abandons them.
VideoBERT
VideoBERT (Sun et al. 2019) is the first to explore Video-Language representation with a transformer based pre-training method. It follows the single-stream structure, porting the original BERT structure to the multi-modal domain as illustrated in Fig. 2-(1). Specifically, it inputs the combination of video tokens and the linguistic sentence into multi-layer transformers, training the model to learn the correlation between video and text by predicting masked tokens. VideoBERT shows the ability of a simple transformer structure to learn high-level video representations that capture semantic meaning and long-range temporal dependencies.
To discretize continuous videos as discrete word tokens, the authors cut the video into small clips of fixed length and cluster the tokens to build a video dictionary. In the pre-training stage, the model is trained with the proxy tasks of MLM, MFM and VLM, corresponding to feature learning in the text-only domain, video-only domain, and video-text domain. Although with simple proxy tasks and a plain model structure, VideoBERT shows great performance on the downstream tasks of zero-shot action classification and video captioning. The model is initialized with the pre-trained BERT weights, and the video tokens are generated based on the S3D (Xie et al. 2018) backbone. All experiments are applied to the cooking domain, with pre-training on a large scale of cooking videos crawled from YouTube by the authors themselves and evaluation on the YouCookII benchmark dataset (Zhou et al. 2018a).

Table 3: A summary of Video-Language Pre-training methods.
HERO
As illustrated in Fig. 2-(2), Li et al. (Li et al. 2020) propose HERO, a Hierarchical EncodeR for Omni-representation learning, which contains a cross-modal transformer to fuse the video frame sequence and the corresponding sentence, and a temporal transformer to learn contextualized video embeddings from the global context. Previous works simply adapt the proxy tasks of masking (MLM) and matching (VLM) that originated from the NLP domain. HERO is the first to design the proxy tasks of LVLM (Local Video Language Matching) and FOM (Frame Order Modeling), which consider the sequential nature of videos. These two proxy tasks have been described in Section 2.2.1. The experiments of HERO prove that the hierarchical transformer structure and the new proxy tasks are both beneficial to downstream tasks. Li et al. also expand the pre-training datasets from the instructional video domain to the TV and movie domains. They find that text-based video-moment retrieval is more sensitive to domain gaps. In other words, keeping the dataset domain consistent, text-based video retrieval can achieve the same or better performance with less pre-training data.
To be more specific, HERO extracts both 2D and 3D video features with ResNet (He et al. 2016) and SlowFast (Feichtenhofer et al. 2019) respectively. The cross-modal transformer takes the combination of the video sequence and the text sequence as input to learn contextualized embeddings through cross-modal attention. The output visual embeddings are further input into the temporal transformer to learn contextualized embeddings from the global video context. HERO applies the proxy tasks of MLM, MFM, VLM and FOM in the pre-training stage and transfers to the downstream tasks of video retrieval, videoQA, video-and-language inference and video captioning. The ablation study shows that FOM can effectively benefit downstream tasks that rely on temporal reasoning (such as QA tasks), and VLM for both global and local alignment can benefit the retrieval tasks.

ClipBERT

Lei et al. (Lei et al. 2021b) propose ClipBERT, a generic framework for video-text representation learning that can be trained in an end-to-end manner. Different from previous works that extract video features from a pre-trained backbone such as S3D (Xie et al. 2018), ClipBERT directly samples a few frames from each video clip, using a 2D CNN as the backbone instead of a 3D CNN for lower memory cost and better computation efficiency. Based on the 2D visual backbone, they also demonstrate that image-text pre-training on COCO (Chen et al. 2015) and Visual Genome Captions (Krishna et al. 2017b) benefits video-text tasks. ClipBERT adopts a sparse sampling strategy, sampling a few frames from each clip and using only a single or a few sampled clips instead of full-length videos. The experiments show that 1 or 2 frames per clip and 2 clips per video are sufficient for effective Video-Language pre-training.
The concrete structure of ClipBERT is single-stream (Fig. 2-(3)); the video input is the patch sequence of a single clip. After the 2D backbone generates T visual feature maps for the T frames of each single clip, a temporal fusion layer is applied to aggregate the frame-level feature maps into a single clip-level feature map. A cross-modal transformer is then applied to combine the clip feature map and the text sequence to capture the cross-modal relationship. During inference, when multiple clips are used, their predictions are fused together as the final output. ClipBERT uses the MLM and VLM objectives to optimize the model, and the pre-trained weights are further finetuned for text-based video retrieval and videoQA on 6 benchmarks.

DeCEMBERT

Tang et al. (Tang et al. 2021) propose Dense Captions and Entropy Minimization (DeCEMBERT) to alleviate the problem that the automatically generated captions in pre-training datasets like HowTo100M (Miech et al. 2019) are noisy and sometimes unaligned with the video content. To be specific, the original caption may not describe the rich content of the corresponding video, or may contain only irrelevant words due to recognition errors of ASR. Therefore, DeCEMBERT uses dense captions generated from (Yang et al. 2017) as additional language input for model learning. To better align video with ASR captions, DeCEMBERT proposes a constrained attention loss that encourages the model to select the best-matched ASR caption from a pool of continuous caption candidates.
As illustrated in Fig. 2-(4), DeCEMBERT applies the single-stream structure, using a BERT-like transformer to encode the relationships among video features, dense captions and a set of continuous ASR captions. The whole model is pre-trained with the MLM and VLM tasks and finetuned on video captioning, text-based video retrieval and videoQA. Comprehensive experiments demonstrate that DeCEMBERT is an improved pre-training method for learning from noisy, unlabeled datasets.
VLM
Previous methods (Li et al. 2020) propose either multiple transformer encoders or a single cross-modal encoder that requires both modalities as inputs. What's more, existing pre-training tasks tend to be more and more task-specific, limiting the extensibility and generalization ability of pre-trained models. In contrast, VLM (Video-Language Model) is a task-agnostic model with a BERT-like cross-modal transformer that can accept text, video, or both as input.
VLM introduces two new masked-task schemes: Masked Modality Modeling (MMM) and Masked Token Modeling (MTM). MMM randomly masks a whole modality for a portion of training examples, which forces the encoder to reconstruct the masked modality from the tokens of the other modality. MTM randomly masks a fixed portion of tokens (both video and language tokens) and predicts them against negative candidates, which unifies the MLM and MVM losses. MMM has been validated to be especially effective for text-based video retrieval, and MTM performs better than MLM+MVM. VLM is evaluated on the downstream tasks of text-based video retrieval, action segmentation, action step localization, and videoQA. To apply the BERT-like single-encoder model to generative tasks such as video captioning, VLM uses a masked attention map to make future text tokens unavailable. Based on that, VLM re-uses the language-model heads as prediction heads for generation, with no extra decoder architecture. Experimental results show that VLM maintains competitive performance while requiring fewer parameters. Akbari et al. (Akbari et al. 2021) present an end-to-end framework, VATT (Video-Audio-Text Transformer), for learning multi-modal representations from raw video, audio, and text. Specifically, they partition the raw video frames into a sequence of ⌈T/t⌉ × ⌈H/h⌉ × ⌈W/w⌉ patches of size t × h × w, where T, H, and W are the video's temporal, height, and width dimensions, respectively. The raw audio waveform is segmented along its temporal dimension, and each word token is represented by a one-hot vector. These three modality sequences are transformed by linear projection rather than by pre-trained backbones, as previous works do. To capture the inherent co-occurrence relationships of the three modalities, Akbari et al. (Akbari et al. 2021) adopt the most widely used transformer architecture (ViT), except that the tokenization and linear projection layers are kept separate for each modality.
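The length of VATT's video token sequence follows directly from this patch tokenization. A small helper makes the count concrete; the 4 × 16 × 16 patch size below is an illustrative choice, not necessarily VATT's actual setting:

```python
import math

def num_video_patches(T, H, W, t, h, w):
    """Number of t x h x w patches tiling a T x H x W video
    (ceil division, so partial patches at the edges still count)."""
    return math.ceil(T / t) * math.ceil(H / h) * math.ceil(W / w)

# e.g. 32 frames of 224x224 video with 4x16x16 patches:
# 8 temporal positions * 14 * 14 spatial positions
n = num_video_patches(32, 224, 224, 4, 16, 16)  # -> 1568
```

Each of these patches is flattened and linearly projected to produce one input token for the transformer.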
VATT is optimized by matching video-audio pairs and video-text pairs with common-space projection and contrastive learning. The whole model is pre-trained on Howto100M (Miech et al. 2019), which provides video-audio-text triplets, and AudioSet (Gemmeke et al. 2017), which provides video-audio pairs. After pre-training, VATT is finetuned on the downstream tasks of action recognition, audio event classification (Dai et al. 2017), text-based video retrieval, and image classification. The image classification results on ImageNet (Deng et al. 2009) demonstrate that VATT can be adapted from the video domain to the image domain.
VATT
In conclusion, VATT validates that large-scale self-supervised pre-training is a promising direction for learning multi-modal (video, text, audio) representations with a pure attention-based model and end-to-end training. VICTOR (Lei et al. 2021a) stands for VIdeo-language understanding via Contrastive mulTimOdal pRe-training and is trained on a Chinese Video-Language dataset. VICTOR follows the single-stream model structure, with an encoder transformer to capture the cross-modal relationship and a decoder transformer for generative tasks. What's more, inspired by MoCo (He et al. 2020), which expands the negative samples with a memory bank and momentum updating for better contrastive learning, VICTOR maintains memory queues that save the negative samples for calculating contrastive losses. Synchronously, another network symmetric to the main Query network, named the Key network, is applied to embed the negative samples.
VICTOR
Due to the absence of a Chinese pre-training dataset, Lei et al. (Lei et al. 2021a) collect Alivol-10M from an e-commerce platform, with standard descriptions and corresponding product videos; the details are described in Section 2.3. Lei et al. (Lei et al. 2021a) design the new proxy tasks of Masked Frame Order Modeling (MFOM), Masked Sentence Order Modeling (MSOM), and Dual Video and Sentence Alignment (dual-VSA) for pre-training. MFOM explores the sequential structure of videos by reordering the shuffled video sequence; MSOM is similar to MFOM but from the text perspective. For dual-VSA (similar to VLM), they only take matched video-text pairs as inputs, using the representation of frames/text to retrieve the representation of the corresponding text/frames. In other words, the negative samples from the memory bank only go through the Key transformer network, as the authors point out that inputting mismatched video and text would hamper the pre-training of the multi-modal encoder. The pre-trained weights of VICTOR are further transferred to the downstream tasks of multi-level video classification, content-based video recommendation, multi-modal video captioning, and cross-modal retrieval with both text and image as the input query. CBT (Sun et al. 2020) proposes noise contrastive estimation (NCE) (Józefowicz et al. 2016) as the loss objective for Video-Language learning, which preserves the fine-grained information of video compared to the vector quantization (VQ) and softmax loss in VideoBERT. The model contains 3 components, as shown in Fig. 2-(8): a text transformer (BERT) to embed discrete text features, a visual transformer that takes in the continuous video features, and a third cross-modal transformer to embed mutual information between the two modalities. CBT extends the BERT structure to a multi-stream structure and verifies the effectiveness of the NCE loss for learning cross-modal features.
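The contrastive idea behind NCE-style objectives can be sketched in a few lines: matched video-text pairs in a batch are pulled together while every other pairing in the batch serves as a negative. This is a generic InfoNCE-style sketch, not CBT's exact formulation:

```python
import numpy as np

def info_nce(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss over a batch of
    (N, D) embeddings. Matched pairs share a row index; all other
    rows act as in-batch negatives."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature      # (N, N) similarity matrix
    labels = np.arange(len(logits))
    # cross-entropy in both directions (video->text and text->video)
    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()
    return 0.5 * (xent(logits) + xent(logits.T))

# perfectly aligned pairs give a loss near zero
loss_aligned = info_nce(np.eye(4), np.eye(4))
```

Unlike a softmax over a fixed quantized vocabulary, this objective only needs raw continuous features, which is why NCE preserves fine-grained video information relative to VQ-based losses.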
CBT
In the pre-training stage, the two single-modal transformers learn video and text representations respectively via contrastive learning. The third, cross-modal transformer combines the two modal sequences, computes their similarity score, and learns the relationship of paired video and sentence via the NCE loss. Sun et al. (Sun et al. 2020) propose a curriculum learning strategy: first pre-training the S3D (Xie et al. 2018) backbone, then finetuning the last block of S3D with the visual transformer using the visual loss. Both the pre-trained visual features and the cross-modal features are evaluated on the downstream tasks of action recognition, action anticipation, video captioning, and video segmentation. Zhu et al. (Zhu and Yang 2020) introduce global actions and local regional objects as visual inputs to learn joint video-text representations. ActBERT is a multi-stream model (Fig. 2-(9)) with a Tangled Transformer block, illustrated in Fig. 3, to enhance the communication between different sources. Previous multi-stream structures always use an extra transformer layer to encode the inter-modal relationships, whereas the Tangled Transformer block uses co-attentional transformer layers in which the key-value pairs from one modality pass through the other modality. Experiments on various Video-Language downstream tasks verify that the global action information and local object clues are complementary.
ActBERT
For the global action input, they extract verbs from the corresponding descriptions of each video clip and build a verb dictionary. A 3D network classifier is then trained to predict each video clip's verb labels, and the action feature of each clip is extracted from this classifier after the global averaging layer. For the input of local object regions, the authors use a pre-trained Faster-RCNN (Ren et al. 2015) to extract bounding boxes and the corresponding visual features. ActBERT is pre-trained on the proxy tasks of MLM, MAM (Masked Action Modeling), MOM (Masked Object Modeling), and VLM. The pre-trained weights are further transferred to 5 downstream tasks: video captioning, action segmentation, action step localization, video retrieval, and videoQA.
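The co-attentional exchange used by the Tangled Transformer amounts to cross-attention in which the queries come from one modality stream while the keys and values come from the other. A single-head sketch (names and shapes are illustrative, not ActBERT's implementation):

```python
import numpy as np

def co_attention(x_q, x_kv, Wq, Wk, Wv):
    """Single-head co-attention: queries are projected from one
    modality stream (x_q); keys and values are projected from the
    other stream (x_kv), so information flows across modalities."""
    Q, K, V = x_q @ Wq, x_kv @ Wk, x_kv @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
d = 8
text_tokens = rng.normal(size=(5, d))    # 5 text tokens
video_tokens = rng.normal(size=(3, d))   # 3 video tokens
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
# the text stream attends over the video stream's key-value pairs
attended = co_attention(text_tokens, video_tokens, Wq, Wk, Wv)
```

Running the same layer with the roles of the two streams swapped gives the symmetric video-to-text direction.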
Univl
Previous multi-modal models are pre-trained on understanding tasks, which leads to a discrepancy for generative downstream tasks such as video captioning. Univl is the first to pre-train a model on both understanding and generative proxy tasks. Univl follows the multi-stream structure illustrated in Fig. 2-(10), which contains two single-modal transformer encoders to embed video and text respectively, a cross-modal transformer to fully interact the text and video embeddings, and a decoder for generation tasks.
Univl uses VLM, MFM, MLM, and LR (Language Reconstruction) as proxy tasks for pre-training, and transfers to the downstream tasks of text-based video retrieval, multi-modal video captioning, action segmentation, action step localization, and multi-modal sentiment analysis. There are two types of VLM in Univl. The first trains the two single-modal encoders by matching their video and text sequences with an NCE loss. The other trains the cross-modal transformer by inputting the special token [cls] to predict the alignment score of a given video and sentence. The experiments show that the latter type of VLM, applied on the cross-modal transformer, benefits retrieval tasks more. Univl develops a three-stage training strategy for pre-training. Firstly, Univl trains the weights of the text BERT and the video transformer by matching their output sequences with the NCE objective. Next, the whole model is trained with all objectives at a smaller learning rate. Finally, Univl enhances its video representations by masking the whole text token sequence with a 15% probability. This step-by-step training strategy improves the pre-training process consistently.
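The enhanced-video stage amounts to occasionally blanking the entire text input so the model must rely on video alone. A toy version (the function name, mask token, and interface are ours, not Univl's):

```python
import random

def maybe_mask_text(tokens, mask_token="[MASK]", p=0.15, rng=random):
    """With probability p, replace every text token with the mask
    token, forcing the model to reconstruct meaning from video only."""
    if rng.random() < p:
        return [mask_token] * len(tokens)
    return list(tokens)

# p=1.0 always masks; p=0.0 never masks (deterministic for testing)
always = maybe_mask_text(["a", "dog", "runs"], p=1.0)
never = maybe_mask_text(["a", "dog", "runs"], p=0.0)
```

In training, this would be applied per example with p = 0.15, alongside the ordinary token-level masking.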
Summary & Comparison
In this part, we provide a summary and comparison of the above-mentioned methods from the perspectives of model structure, proxy tasks, training strategy, and performance on the widely used benchmarks: MSR-VTT for text-based video retrieval and YouCookII for video captioning. The paradigm of the above methods can be summarized as building models containing transformer encoders to learn intra- and inter-modality representations, pre-training the models on pre-designed proxy tasks, and finetuning/evaluating on various downstream tasks.
Model Structure
For the transformer blocks, most works apply the original transformer structure directly, while some make adjustments to adapt to multi-modal processing. For example, VATT shares the weights of the self-attention layers across different modalities but keeps the tokenization and linear projection layers independent for each modality. VLM uses different attention masks to accommodate downstream tasks that require different modalities. ActBERT uses Tangled Transformer blocks to build relationships between different modalities across independent transformer blocks.
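Such attention masks are just boolean matrices over the joint token sequence: a causal mask hides future text tokens for generation, and a modality mask can blank out one modality so the same encoder serves text-only tasks. The helpers below are illustrative, not VLM's actual implementation:

```python
import numpy as np

def causal_mask(n):
    """Lower-triangular mask: position i may attend only to j <= i,
    hiding future tokens for generative decoding."""
    return np.tril(np.ones((n, n), dtype=bool))

def modality_mask(n_video, n_text, text_only=False):
    """Bidirectional mask over the joint [video; text] sequence;
    optionally blank out the video block so the same encoder
    processes text alone."""
    n = n_video + n_text
    m = np.ones((n, n), dtype=bool)
    if text_only:
        m[:, :n_video] = False   # nothing attends to video positions
        m[:n_video, :] = False   # video positions attend to nothing
    return m

mask = causal_mask(4)  # token 0 cannot see tokens 1..3
```

Swapping masks at finetuning time is what lets a single encoder cover both understanding and generation.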
For the word embedding, most works apply WordPiece embeddings with the 30,000-token vocabulary provided by BERT (Devlin et al. 2018). For the video embedding, most works extract video features with the fixed visual backbone S3D (Xie et al. 2018), pre-trained by Miech (Miech et al. 2020). There are exceptions: for example, VICTOR (Lei et al. 2021a) uses the 2D backbone Inception-V4 (Szegedy et al. 2017), pre-trained on ImageNet (Deng et al. 2009), to extract visual features for each frame; HERO combines 2D features from ResNet (He et al. 2016) and 3D features from SlowFast (Feichtenhofer et al. 2019); and ClipBERT (Lei et al. 2021b) and VATT (Akbari et al. 2021) design end-to-end frameworks without a fixed visual backbone.
Proxy Tasks
The selection or design of the proxy tasks directly determines the model's training objectives and further affects the performance on downstream tasks. Most pre-training works inherit the masking-based and matching-based tasks from BERT, learning the correlations within the same modality and across different modalities. HERO and VICTOR (Lei et al. 2021a) design ordering tasks to explore the sequential structure of videos, which have been demonstrated to benefit downstream tasks that rely on temporal reasoning, such as videoQA. In Univl and VLM (Xu et al. 2021), masking out a whole modality and reconstructing it from the other modality benefits the retrieval task.
Training Strategy
A few works develop stage-by-stage pre-training methods instead of training the whole model in one step. For example, Univl, the representative Multi-Stream transformer, trains the transformer encoder for each modality first and then the whole model with a decreasing learning rate. CBT (Sun et al. 2020) uses a curriculum learning strategy: first pre-training the visual feature extractor S3D and then jointly fine-tuning the last blocks of S3D with the visual transformer using the CBT visual loss. Compared to training in one step, training stage-by-stage makes the pre-training progress smoother.
Downstream Tasks
To evaluate the pre-trained models, the standard approach is to transfer the pre-trained weights to other downstream tasks. We compare the above methods on the matching task of text-based video retrieval and the generative task of video captioning; the results are shown in Tab. 4 and 5, respectively. We divide the models according to their structure. VLM (Xu et al. 2021) generally performs the best among Single-Stream models for both retrieval and captioning tasks. Among Multi-Stream models, Univl generally outperforms the others. VICTOR (Lei et al. 2021a) is not included since it is pre-trained and evaluated only on a Chinese dataset.
Discussion
Pre-training has shown obvious improvements on various Video-Language tasks compared to traditional methods. Nevertheless, the potential of the transformer structure for Video-Language has not been fully explored, and several challenges remain to be tackled. In this section, we discuss these challenges and possible future directions.
Pre-training Dataset
Since transformers lack some of the inductive biases of CNNs, they require large-scale datasets for pre-training. Consequently, the quality, quantity, and diversity of the dataset have a significant influence on the general performance of transformers. For the problem of quantity, the most commonly used pre-training dataset so far is Howto100M (Miech et al. 2019), which contains over 100M video-sentence pairs. Experiments in (Miech et al. 2019) show that increasing the amount of training data improves performance on the various evaluated tasks. For the problem of quality, since large-scale manual video annotation is expensive, the captions of videos are usually generated automatically from ASR (Miech et al. 2019; Sun et al. 2019), which inevitably introduces mistakes and misalignments between captions and the corresponding video content. DeCEMBERT (Tang et al. 2021) has mitigated these problems by adding extra inputs (dense video captions) and adjusting the training objective. For the problem of diversity, the pre-training dataset used in VideoBERT (Sun et al. 2019) focuses on the cooking domain, the videos of Alivol-10M (Lei et al. 2021a) come from an e-commerce website, and the videos of Howto100M (Miech et al. 2019) are crawled from YouTube. These pre-training datasets are mainly from a single domain and inevitably have domain gaps with the various downstream datasets, which has been demonstrated to be harmful to the performance of pre-trained models. On this topic, Zhou et al. showed that pre-training on a considerably small subset of domain-focused data can effectively close the source-target domain gap and achieve significant performance gains. A similar conclusion is reached in HERO: the domain gap between finetuning and pre-training cannot be eliminated by data volume. In conclusion, although many explorations have been done, there is still a long way to go to improve the quantity, quality, and diversity of pre-training datasets.
Video-Language Transformer Designs
Existing works mostly follow the paradigm from the NLP domain and make adjustments to adapt to Video-Language processing, including using multi-stream structures to meet the needs of multi-modal input, designing reordering proxy tasks to exploit the sequential structure of videos, and adding the audio modality as supplementary information. Although the results of these applications are quite encouraging, current methods require further insight to better match Video-Language tasks. Firstly, how to deal with the visual backbone properly remains unsolved. Existing works either apply an independently trained visual backbone to extract video features (Xie et al. 2018) or train a model that includes a 2D CNN backbone in an end-to-end manner (Lei et al. 2021b). The first approach not only leads to a domain gap between feature extraction and pre-training but also hinders model improvement due to the loss of fine-grained visual information. The second approach tends to lose the temporal information in the video. Secondly, a standard evaluation of Video-Language pre-training is an urgent need for sustainable development in this field. So far, different models are evaluated on different downstream tasks/datasets with different detailed settings, which makes it unfair to compare their performance. A unified benchmark is needed to evaluate different pre-training methods, such as GLUE (Wang et al. 2018) in NLP. VALUE (Li et al. 2021) has proposed an evaluation benchmark that covers 11 datasets over 3 popular tasks, including retrieval, captioning, and videoQA, but it has not yet been widely adopted.
Another promising direction is to improve the generalization ability of pre-trained models. As a collection of multiple modalities, a video contains more than semantic information. For example, ActBERT (Zhu and Yang 2020) uses fine-grained object regions of videos, and VATT (Akbari et al. 2021) explores the inner relationships among the frame sequence, audio, and sentence. We believe there are more clues that can be mined from videos, such as scene information and character information. How to make use of this information and transfer it to more downstream tasks is a promising future direction. Building on multiple inputs, video analysis should not be limited to the analysis of general semantics; tasks related to the image, audio, and text modalities are expected to be covered by a comprehensive model. What's more, we notice that existing works mostly focus on the domains of activities, films, and TV shows. Other domains, such as the medical field and surveillance recordings, have many potential applications as well.
Transformer Efficiency
A well-known concern with transformers is the efficiency problem of quadratic time and memory complexity, which hinders the transformer's scalability in practice. As mentioned in (Tay et al. 2020), model efficiency refers to both memory footprint and computation cost. For memory-efficient transformers, Lee et al. (2021) use weight sharing across layers and modalities to reduce the overall model size; a similar idea originates from Universal Transformers (Dehghani et al. 2019) and Albert (Lan et al. 2020). For computation-efficient transformers, Michel et al. (Michel et al. 2019) remove some attention heads at test time without impacting performance, and Prasanna et al. (2020) also reduce the computation cost by pruning and decomposing the original transformer structure.
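To see why cross-layer weight sharing shrinks a model, consider a back-of-the-envelope count of the per-layer attention and feed-forward weights (ignoring embeddings, biases, and layer norms); with Albert-style sharing, one set of layer weights is reused at every depth:

```python
def transformer_params(n_layers, d_model, shared=False):
    """Rough weight count for a transformer encoder stack.
    Per layer: 4 projection matrices (d x d) for Q, K, V, output,
    plus a feed-forward block (d x 4d and 4d x d)."""
    per_layer = 4 * d_model * d_model + 2 * d_model * 4 * d_model
    return per_layer if shared else n_layers * per_layer

full = transformer_params(12, 768)                # independent layers
tied = transformer_params(12, 768, shared=True)   # one set reused 12x
```

Sharing reduces these weights by a factor equal to the depth (12x here), at the cost of less expressive per-layer specialization; note this sketch says nothing about activation memory or compute, which are unchanged by weight tying.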
In summary, studies of efficient transformers come mainly from the NLP domain and focus on handling longer sequence inputs. Video naturally conveys more information than pure text, so Video-Language processing requires deeper model structures and more parameters, and thus places higher demands on hardware and computation.
Conclusion
Pre-training has become a popular approach in NLP and has been further applied to vision tasks. Compared to other vision-language pre-training areas, fewer pre-training works have been reported in the Video-Language area. We therefore conduct a comprehensive overview of pre-training methods for Video-Language processing in this paper. This survey first reviews the background knowledge related to transformers, then summarizes the pre-training and finetuning process of Video-Language learning by introducing the common proxy tasks and downstream tasks, respectively. Furthermore, we describe commonly used video datasets according to their scale, annotation type, etc. We also summarize the state-of-the-art transformer models for Video-Language learning, highlight their key strengths, and compare their downstream performance. Finally, we conclude the paper with discussions of the current challenges and possible future research directions.
An influential hypothesis proposes that the tempo of evolution is faster in the tropics. Emerging evidence, including a study in this issue of PLOS Biology, challenges this view, raising new questions about the causes of Earth’s iconic latitudinal diversity gradient (LDG).
Biologists have long puzzled over the spectacular diversity of animals and plants from Earth's tropical regions. It is true that some tropical environments are not especially rich in species, and some groups of organisms show contrarian patterns with diversity peaks that occur far outside of the warm, humid tropics. Nonetheless, the big picture is clear: A vastly disproportionate fraction of Earth's terrestrial biodiversity is concentrated in tropical rainforests, and warm water reef environments similarly account for a large fraction of marine diversity. The extremes of tropical diversity transcend the ability of most humans to process it: Some Amazonian rainforests, for example, contain more species of trees in just a few hectares than are found in the entirety of Europe or North America [1]. In general, the most diverse tropical rainforests support order-of-magnitude increases in species richness relative to otherwise comparable temperate zone communities across a wide range of organisms. Despite decades of study, however, the causes of this latitudinal diversity gradient (LDG) remain elusive.
One of the most prominent hypotheses for the LDG is, loosely speaking, the idea that biological processes speed up in the tropics, potentially due to the kinetic effects of temperature on the rates of organismal processes. It seems obvious that the pulse of life should be faster under a torrid tropical sun, and, to naturalists who've spent time in lowland rainforests in particular, such a view accords well with perceptions of the humid tropics as a raging, steamy mess of species interactions that collectively generate the tangled web that is tropical diversity. It is generally accepted that temperature can affect metabolic rate and many other biological processes, including those involving ecological interactions between species (e.g., competition, predation, and herbivory). The specific mechanisms connecting thermal energy to biodiversity remain unclear. For example, they might involve the influence of temperature on rates of molecular evolution, which might then influence rates of speciation [2]. Or, species in warmer environments might live closer to their optimal body temperatures, thus enabling them to allocate more resources to performance-associated functions and leading to a systematic upgrading in the intensity of antagonistic or coevolutionary interactions between species [3]. Regardless of the specific mechanism, the general idea is captured by Brown [4], who notes that "'Diversity begets diversity' in the tropics, because 'the Red Queen runs faster when she is hot.'" Writing in PLOS Biology, Drury and colleagues [5] demonstrate that a central prediction of these "faster tropics" hypotheses fails to hold up. They predicted that, if certain types of ecological interactions between species are stronger in the tropics, then we should see a signal of those interactions in long-term patterns of trait evolution.
In particular, the increased pressure to respond to species interactions in the tropics should result in faster overall rates of morphological evolution for tropical species. To test this hypothesis, the authors studied the rate of morphological evolution in birds, analyzing a large dataset on bill shape and body proportions from other recent studies [6] with a battery of sophisticated statistical models. These models allowed the rate of morphological change to differ systematically with latitude. Intriguingly, the models that best fit the data in some cases were those that allowed for strong interactions between species in driving patterns of divergence among closely related species that occupied that same biogeographic region (e.g., the neotropics). Thus, there is a partial signal of species interactions on the morphologies of species we see living together today, including those from both tropical and temperate regions. As suggested by the authors, these patterns might reflect a form of ecological character displacement, whereby morphologically similar species evolve differences that minimize their ecological overlap. But, surprisingly, the intensity of these effects shows no consistent relationship with latitude. The take-home message is that-at least for birds and the traits considered-species are not evolving more rapidly in the tropics.
Drury and colleagues note that their results contradict recent articles that have documented differences in phenotypic evolutionary rates across latitude, although the studies referenced generally looked at different types of traits (e.g., birdsong). They suggest several potential reasons for the discrepancies between their results and those prior studies. But, critically, these earlier studies generally did not report faster evolution in the tropics, but faster evolution in the temperate zone. Hence, the results of Drury and colleagues and the earlier studies all converge to a similar and more general finding, which is that the warm tropics really aren't so hot for macroevolution, at least as far as phenotypic evolutionary rates are concerned. By rejecting the simple explanations (faster evolution), new questions emerge about how and why tropical bird communities show such dramatic phenotypic and ecological diversity.
Morphological evolution is not the only process that fails to show the expected pattern of "heating up" in the tropics. A number of recent studies have found that rates of species formation are either unrelated to latitude or slower in the tropics [7][8][9]. These results argue strongly against temperature kinetic models of biodiversity, whereby faster speciation emerges from the effects of warmer temperatures in the tropics on mutation and metabolic rates [10]. Many of the same causal pathways that predict increased rates of speciation as a function of temperature would also apply to rates of morphological evolution: Increased mutation rates in the tropics, for example, should accelerate the tempo of phenotypic evolution due to increased mutational variance in traits. But, regardless of whether we consider phenotypic evolution (as in Drury and colleagues) or lineage diversification, there is simply no evidence for faster evolutionary rates in the tropics.
The results from Drury and colleagues [5] and other studies do not reject all possible causal pathways by which temperature or species interactions might facilitate high tropical diversity. Many phylogeny-based studies of species diversification and phenotypic evolution frame their interpretations through the lens of interspecific competition, ecological opportunity, and character displacement. Yet numerous other types of interactions are relevant to global biodiversity patterns, and some of these interactions have scarcely been explored from a macroevolutionary perspective. Many such interactions have the potential to influence species richness and ecological diversity, perhaps through mechanisms that involve an indirect effect of temperature on equilibrium diversity levels. With more data on how host-pathogen, predator-prey, and other biotic interactions vary latitudinally, perhaps we will emerge with a greater understanding of the diverse mechanisms that contribute to the spectacular enrichment of tropical diversity.
The Effect of Habitual and Experimental Antiperspirant and Deodorant Product Use on the Armpit Microbiome
An ever expanding body of research investigates the human microbiome in general and the skin microbiome in particular. Microbiomes vary greatly from individual to individual. Understanding the factors that account for this variation, however, has proven challenging, with many studies able to account statistically for just a small proportion of the inter-individual variation in the abundance, species richness or composition of bacteria. The human armpit has long been noted to host a high biomass bacterial community, and recent studies have highlighted substantial inter-individual variation in armpit bacteria, even relative to variation among individuals for other body habitats. One obvious potential explanation for this variation has to do with the use of personal hygiene products, particularly deodorants and antiperspirants. Here we experimentally manipulate product use to examine the abundance, species richness, and composition of bacterial communities that recolonize the armpits of people with different product use habits. In doing so, we find that when deodorant and antiperspirant use were stopped, culturable bacterial density increased and approached that found on individuals who regularly do not use any product. In addition, when antiperspirants were subsequently applied, bacterial density dramatically declined. These culture-based results are in line with sequence-based comparisons of the effects of long-term product use on bacterial species richness and composition. Sequence-based analyses suggested that individuals who habitually use antiperspirant tended to have a greater richness of bacterial OTUs in their armpits than those who use deodorant. In addition, individuals who used antiperspirants or deodorants long-term, but who stopped using product for two or more days as part of this study, had armpit communities dominated by Staphylococcaceae, whereas those of individuals in our study who habitually used no products were dominated by Corynebacterium. 
Collectively these results suggest a strong effect of product use on the bacterial composition of armpits. Although stopping the use of deodorant and antiperspirant similarly favors the presence of Staphylococcaceae over Corynebacterium, their differential modes of action exert strikingly different effects on the richness of other bacteria living in armpit communities.
INTRODUCTION
Like the gut or the mouth, the human skin is covered with life. This life includes bacteria, fungi, Archaeans, bacteriophages, and even animals such as nematodes and Demodex mites (Marples, 1965; Grice & Segre, 2011; Kong & Segre, 2012). Since the 1950s it has been clear that the precise composition of the skin biome influences its effectiveness as a defensive layer against pathogens (Eichenwald et al., 1965), and contributes to bodily odors (Shelley, Hurley & Nicholas, 1953). Some species are better at defending our skin than others (Christensen & Brüggemann, 2014), just as some species produce different odors than do others (Leyden et al., 1981). What is unclear is the extent to which human behaviors influence the composition of skin microbes. Inasmuch as two types of products, antiperspirants and deodorants, are used daily in armpits by a large number of people (perhaps as many as 90% of people in the US, according to Benohanian, 2001), armpits represent an interesting context in which to explore the general phenomenon of how human behavior and product use influence skin microbes.
A long history of work focuses on the biology of the culturable bacteria in armpits (Shelley, Hurley & Nicholas, 1953;Marples, 1965;Leyden et al., 1981). More recent work has built upon this history to consider both those taxa that are culturable and those whose presence is only detectable (to date) through sequencing. In one of the first of this new wave of studies, Grice et al. (2009) sampled, cloned, and Sanger-sequenced bacteria from 20 body regions sampled from nine participants. The most prominent bacteria present in armpits were species of Corynebacterium, Staphylococcus, Betaproteobacteria, Clostridiales, Lactobacillus, Propionibacterium, and Streptococcus. Interestingly, bacterial residents of armpits were shown to be highly variable even across this small number of participants: four participants' communities were dominated by Corynebacterium species, three by Staphylococcus species, and two by Betaproteobacteria. Gao et al. (2010) also found large variation among individuals in the composition of armpit bacterial taxa (drawn from similar genera as in Grice et al., 2009). This high person-to-person variability stands in contrast to samples from other skin habitats, which show less inter-individual variation, and are locations where product use is less common (Costello et al., 2009;Caporaso et al., 2011;Hulcr et al., 2012) (although see Callewaert et al., 2013). This variability might simply reflect stochastic effects or even the sequencing of dead bacteria on the skin (Grice & Segre, 2011). However, Egert et al. (2011) found that most of the same common taxa in the armpits, including most/all of the common taxa found in Grice et al. (2009) and Gao et al. (2010), were the most metabolically active and contributed the most to armpit odor.
The high variability in armpit communities among individuals suggests that an unaccounted-for factor, perhaps product use, might be exerting a strong influence on armpit bacteria, which may in turn have functional consequences for the host. Older, culture-based studies suggest that the use of deodorants and antiperspirants appears to reduce the abundance of culturable bacterial taxa, particularly those of Corynebacterium, a slow-growing lineage of bacteria that plays a key role in the production of armpit odor (Leyden et al., 1981). This effect is not accidental, inasmuch as the intent of underarm products has long been the reduction of armpit odor, either through direct reduction in the biomass of bacteria or through blocking the exudates of the apocrine glands, which become odiferous when metabolized by bacteria (Taylor et al., 2003; Wilke et al., 2007; Fredrich et al., 2013). More recently, two studies by Callewaert et al. (2013) and Callewaert et al. (2014) found an association between product use and the diversity of bacteria in armpits. Although the Callewaert et al. (2014) study, which specifically tested for effects of product use in nine people, did not include a control group (people who habitually do not use any products), this work is clearly suggestive of a potentially large impact of underarm products on entire communities of armpit bacteria.
Here, we examine several questions related to product use and armpit bacterial communities. First, we test whether there is a direct relationship between the abundance of readily culturable bacteria and product use. Second, we use a sequencing based approach to consider whether there are long-term differences in the species richness and composition of armpit bacteria on individuals who habitually use antiperspirant, deodorant or no product. We also consider whether these differences are in line with what would be expected given the intended effects of underarm products in reducing the abundance of odor-causing armpit bacteria (i.e., primarily Corynebacterium), as well as the different mechanisms by which antiperspirants vs. deodorants achieve this reduction. Finally, we compare the abundance of two focal taxa, an OTU categorized as Staphylococcaceae (predominant in comparison to the OTU categorized as Staphylococcus) and an OTU of Corynebacterium, as a function of product use, gender (based on Callewaert et al.'s, 2013 work), and time since ceasing product use.
MATERIALS & METHODS

Participants
Eighteen individual citizen scientists (public participants in scientific research) were recruited through the NC Museum of Natural Sciences' Genomics & Microbiology Lab for armpit community sampling. Prior to the start of the experiment, the proposed study was reviewed and approved by the North Carolina State University Institutional Review Board for the Use of Human Subjects in Research (IRB#1987). All participants were supplied with an IRB authorized consent form and all provided their consent to participate (indicated by their signatures) prior to the start of the experiment. Individuals were recruited to represent three groups, each with equal numbers of men and women. One group of participants reported typically not using any deodorants or antiperspirants. Another group reported regular use of deodorant-only products. The third group reported regular use of antiperspirant products. Not everyone in our study used the same product brand, but all antiperspirant users used products containing aluminum zirconium trichlorohydrex Gly as the active ingredient. Although we designed our study to have equal group sizes, participant drop-out and product mis-classification during the course of the study resulted in a final sample of five participants each in the no product use (three men and two women) and deodorant-only use (three men and two women) groups, and seven participants (three men and four women) who used antiperspirant-containing products.
We experimentally altered product use by the participants over eight days. On the first day (Day 1), participants went about their normal habits (e.g., applying deodorant or antiperspirant if they normally wore it); we did not require that they apply product at any certain time of day or number of times per day, nor did we require that they shower a certain amount. All participants indicated showering/bathing 3-14 times per week (Table S1), and they were asked to continue their normal showering/bathing routine for this experimental week. On Days 2-6, participants were asked to discontinue product usage. During the last two days (Days 7 and 8) of sampling, all individuals, including those who did not normally use antiperspirant or deodorant, were asked to use an antiperspirant/deodorant product (we provided Secret Powder Fresh for women and Old Spice Fiji for men; both contained aluminum zirconium trichlorohydrex Gly as the active ingredient). On each of the eight days of the study, both armpits of each individual were swabbed once for 45-60 s between 11 am (EST) and 1 pm (EST) with a dual-tipped sterile BBL CultureSwab (Becton Dickinson and Company, Franklin Lakes, NJ, USA). A tradeoff exists between sampling participants many times (daily) over a short period of time and sampling less frequently over a longer period of time (weeks to months). We opted for more frequent sampling (daily for eight days) vs. longer-term sampling to assess shorter-term effects of ceasing product use in order to achieve a balance across competing considerations including participant compliance, available budget, supplies and personnel time.
Culture-based sampling
Bacteria sampled from the left and right armpits were cultured the same day they were collected by immersing one of the two sample swabs into 0.5 mL of phosphate buffered saline (PBS), mixing, and spreading 20 µl of this solution onto sterile LB agar plates. All cultures were incubated aerobically at 37 °C for approximately 22 h, and then stored at 4 °C to stop further colony growth. Plates were photographed and numbers of colonies occupying a standard central region of each plate (18.07 cm²) were counted using ImageJ software (version 1.46; Rasband, 1997-2014). The 18.07 cm² region on the plate does not necessarily equate to the same size region on a person's skin. Abundance counts for bacteria cultured from the left vs. right armpit were averaged to yield one abundance score (i.e., average number of CFU present on two culture plates) per person for each day of the experiment.
Culture-based analyses
To test the effects of time and change in product use on abundance of culturable bacteria, we ran several analyses of variance (ANOVA) on time intervals of interest, using SPSS version 21.0 (IBM Inc., Armonk, NY, USA). To determine whether regular use of underarm products exerts an effect on initial abundance immediately following disuse of products, we ran a one-way ANOVA on abundance of culturable bacteria sampled on Day 2 (the first day all subjects ceased use of products). We conducted mixed-model (day × product use × gender) ANOVAs to test effects of stopping product use and continued disuse on abundance of culturable bacteria (Day 1 vs. Day 2-6), and to test effect of antiperspirant application on abundance (Day 6 vs. Days 7-8).
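The Day-2 group comparison described above amounts to a one-way ANOVA over per-participant colony counts. As a minimal sketch of that calculation in Python (the study itself used SPSS; the CFU counts below are hypothetical, not the study's data):

```python
# One-way ANOVA F statistic, computed from scratch on hypothetical
# Day-2 colony counts for the three product-use groups.

def one_way_anova(groups):
    """Return the F statistic for a one-way ANOVA over lists of values."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Between-group sum of squares (group means vs. grand mean)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (values vs. their own group mean)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

antiperspirant = [12, 8, 15, 10, 9, 14, 11]   # hypothetical CFU counts
deodorant      = [85, 92, 70, 105, 88]
no_product     = [60, 75, 55, 80, 66]
F = one_way_anova([antiperspirant, deodorant, no_product])
```

With group differences this pronounced the F statistic is large, and significance would then be judged against an F(2, 14) reference distribution, matching the degrees of freedom for 17 participants in three groups.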
Sequence-based sampling
Several extensive surveys of armpit microbes (e.g., Flores, Henley & Fierer, 2012) and our personal observations indicate that armpit samples show poor amplification for sequence-based analyses. In addition, our preliminary experiments indicated that presence of product (antiperspirant or deodorant) inhibited our ability to isolate and subsequently PCR amplify armpit bacterial DNA, so we chose to conduct our ''early'' sequence-based sampling on Day 3 (the second day of continued product disuse) and the ''late'' sampling on Day 6 (the fifth and final day of continued product disuse). The expectation was that on Day 3, residual effects of product use would remain persistent (if they existed) and that by Day 6 bacterial populations might have begun to recover from whatever such effects might be, and hence converge on a common, shared composition.
DNA was isolated from the second sample swab, stored dry at −20 °C for up to one month prior to isolation, using the PowerSoil DNA Isolation Kit (MO BIO Laboratories, Carlsbad, CA, USA) with minor modifications. Instead of using soil, the swab tip was swirled into the beads for approximately 5 s and removed before adding solution C1. For the elution, solution C6 was heated to 50 °C before being added in the final step of the protocol. Also, only 50 µl of solution C6 was used to elute the DNA. All isolated DNA samples were stored at −20 °C. The V4 region of 16S rDNA was PCR amplified from each DNA sample using Premix ExTaq (Takara Bio) with primers designed to amplify bacteria and Archaea (515F: GTGCCAGCMGCCGCGGTAA, and 806R: GGACTACHVGGGTWTCTAAT), modified to include Roche 454 adapters and index sequences as previously described (Hulcr et al., 2012). Reactions were set up such that there was a unique index for each subject, arm (left vs. right) and day. All PCRs, including a no-template control, were performed in triplicate and after thermocycling, each triplicate was pooled and purified using an UltraClean-htp 96-well PCR Clean-up kit (MO BIO Laboratories, Inc., Carlsbad, CA, USA). The purified PCR products were quantified with a Qubit 2.0 and dsDNA BR Assay Kit (Invitrogen, Grand Island, NY, USA) and an equal mass (110 ng) of each was mixed into a single pool of all individuals and days. The no-template control DNA was below detectable levels and thus the entire no-template reaction mixture was added to the pool to be sequenced, to allow for detection of possible contaminants (Salter et al., 2014). An ethanol precipitation was performed to concentrate the mixed pool of index products. The DNA was sent to Selah Genomics (Greenville, SC, USA) for Roche 454 next-generation pyrosequencing. Sequence data from both axillae at day 3 are deposited in NCBI as Bioproject PRJNA281417 (http://www.ncbi.nlm.nih.gov/bioproject/).
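The 515F/806R primers quoted above contain IUPAC degenerate bases (M, H, V, W), each standing for more than one nucleotide; this is what lets a single primer pair anneal to the 16S genes of a broad range of bacteria and Archaea. A small illustrative sketch of how such a primer expands into its concrete sequence variants (the primer strings are taken from the text above; everything else is generic):

```python
from itertools import product

# IUPAC degenerate-base codes that appear in the 515F/806R primers.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "M": "AC", "H": "ACT", "V": "ACG", "W": "AT"}

def expand_primer(seq):
    """Enumerate every concrete sequence a degenerate primer can match."""
    return ["".join(bases) for bases in product(*(IUPAC[b] for b in seq))]

variants_515f = expand_primer("GTGCCAGCMGCCGCGGTAA")   # one M  -> 2 variants
variants_806r = expand_primer("GGACTACHVGGGTWTCTAAT")  # H*V*W -> 18 variants
```

The number of variants multiplies across degenerate positions, so even a handful of ambiguity codes gives the primer pair substantially wider taxonomic reach than any single fixed sequence.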
The resulting sequences were analyzed using the QIIME (version 1.7.0) microbial community analysis software (Caporaso et al., 2010). This included initial filtering of DNA sequences using default parameters (to ensure their minimum and maximum lengths were 200 bp and 1,000 bp respectively, that quality score was at least 25, and that no ambiguous or mismatched bases appeared in the primer sequence) and assignment of multiplexed reads to samples based on their indexed barcode. Sequences from each sample were clustered into Operational Taxonomic Units (OTUs) based on 97% sequence similarity. Using a strategy of de novo OTU picking in QIIME, a representative sequence was picked for each OTU and taxonomy was assigned using the uclust consensus taxonomy classifier. Consistent with previous studies (e.g., Hulcr et al., 2012) and with the default in QIIME, we assigned taxonomy to Level 6 (L6), which assigns OTUs to genus level. Before performing any further analyses, we rarefied our data to 1,000 reads per armpit sample. Four samples (only a single sample from each of four people distributed across categories of product use, gender, and day) had fewer than 1,000 reads and were excluded from further analyses. In order to retain all participants in our study without having unbalanced data, we randomly selected either the left or right armpit for each person and analyzed one armpit for day 3 and the same armpit for day 6. Random selection was done using the RANDBETWEEN function in Microsoft Excel 2013, and selection of right vs. left armpit for each person is included in Table S2. All subsequent sequence-based analyses were based on one armpit per person. Read counts of each OTU were exported as a matrix for subsequent analyses after the removal of singletons and doubletons from the dataset.
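Rarefaction to 1,000 reads per sample, as described above, amounts to randomly subsampling the same number of reads from every sample so that richness comparisons are not biased by uneven sequencing depth. A minimal sketch of that subsampling step (the study used QIIME; the OTU names and read counts below are hypothetical):

```python
import random

def rarefy(otu_counts, depth=1000, seed=0):
    """Subsample `depth` reads without replacement from a sample's OTU
    read counts; return rarefied counts, or None if the sample has
    fewer than `depth` reads (such samples were excluded in the study)."""
    total = sum(otu_counts.values())
    if total < depth:
        return None
    # Expand counts into one entry per read, shuffle, keep the first `depth`.
    pool = [otu for otu, n in otu_counts.items() for _ in range(n)]
    random.Random(seed).shuffle(pool)
    rarefied = {}
    for otu in pool[:depth]:
        rarefied[otu] = rarefied.get(otu, 0) + 1
    return rarefied

sample = {"Staphylococcaceae": 900, "Corynebacterium": 400, "Anaerococcus": 50}
r = rarefy(sample)                           # sums to exactly 1,000 reads
shallow = rarefy({"Corynebacterium": 300})   # under 1,000 reads -> None
```

Because the subsample is random, production pipelines typically either fix a seed (as here) or average richness over many rarefaction draws.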
Sequence-based analyses
For all participants we used the day 3 and day 6 samples (for person T we used the day 5 sample, as no day 6 sample existed) to assess richness (i.e., number of OTUs per individual) and composition at the level of each individual.
Richness
The program Primer-E v.6.1.15 (PRIMER-E, Plymouth, UK) was used to calculate OTU richness (defined as number of OTUs present per person per day) based on the OTU table. A mixed model ANOVA was performed using SPSS (version 21.0) to determine the effect of day (early, Day 3 vs. late, Day 6, as described above) and product use (regular antiperspirant users, deodorant users, or participants who did not regularly use underarm products) on richness. The relationship between the abundance of culturable bacteria and richness was computed across all groups using a Pearson correlation.
Composition
We visualized the composition of armpit bacteria using non-metric multidimensional scaling ordination (NMDS) in Primer-E v.6.1.13 with PerMANOVA ext. 1.0.3 (Clarke & Gorley, 2006). To do this, we first constructed NMDS plots with 100 restarts and a Type I Kruskal fit scheme based on a Dissimilarity matrix of Bray-Curtis distances. To assess the relationship between product use and sampling period (early vs. late), we conducted a permuted multivariate analysis of variance (PerMANOVA) test with treatment group (product use categories described above) and sampling period and their interaction as factors, 9,999 iterations and Type III sums of squares. We conducted SIMPER analyses for each significant factor to determine the OTUs that contributed the most to pairwise between-group differences in ordination space (Table 1). For a full comparison of all OTUs that differed in abundance between the usage groups, we conducted a Metastats analysis (see http://metastats.cbcb.umd.edu/detection.html and White, Nagarajan & Pop, 2009) using pairwise comparisons of average sequence reads for each product use group. Because Corynebacterium and Staphylococcus have been previously shown to be important taxa in armpit communities (see above), we conducted additional analyses of their abundances (i.e., number of reads) using a mixed model ANOVA with the independent factors of product use, gender, day, and their interaction. Finally, we determined the relationship between the abundance of Corynebacterium and Staphylococcus across all groups using a Spearman rank correlation. We used SAS v.9.3 Statistical Software (SAS Institute, Inc., Cary, NC, USA) to conduct both of these analyses.
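The NMDS ordination described above is built from a matrix of pairwise Bray-Curtis distances between samples. The dissimilarity itself is simple to compute from OTU read counts; a sketch with hypothetical count profiles (the study used Primer-E for the full ordination and PerMANOVA):

```python
def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two OTU count profiles (dicts):
    sum of absolute count differences over the sum of all counts.
    0 means identical profiles; 1 means no shared OTUs."""
    otus = set(a) | set(b)
    num = sum(abs(a.get(o, 0) - b.get(o, 0)) for o in otus)
    den = sum(a.get(o, 0) + b.get(o, 0) for o in otus)
    return num / den if den else 0.0

# Hypothetical rarefied profiles for two participants.
no_product     = {"Corynebacterium": 700, "Staphylococcaceae": 200, "Anaerococcus": 100}
antiperspirant = {"Corynebacterium": 100, "Staphylococcaceae": 600, "Micrococcus": 300}
d = bray_curtis(no_product, antiperspirant)
```

Computing this distance for every pair of samples yields the dissimilarity matrix that NMDS then embeds into two dimensions while preserving rank order of the distances.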
RESULTS

Culture-based results
If antiperspirants and deodorants negatively affect the abundance of bacteria in armpits, we would expect that on the first day of no product use (Day 2 in our experiment) the residual effects of products would lead to lower bacterial abundances in armpits of those individuals who use such products. As time progressed, we expected those abundances to increase among habitual product users (as bacterial populations recover). Finally, once antiperspirant was applied to (rebounded) armpit assemblages after five days of no product use, we expected abundances to decline. In order to focus on one experimental treatment (product vs. no product use), we instructed participants to alter only that aspect of their behavior.

Day 2: A significant effect of product use habit was observed on Day 2, the first day none of the subjects applied product (F(2,16) = 3.9, p = 0.046; Table 2). Subsequent tests performed using Tukey HSD (Honestly Significant Difference) indicated that antiperspirant users who ceased using product initially had significantly lower abundances of bacterial colonies on culture plates than deodorant users who ceased using product (p < 0.05). Significant pairwise differences were not observed between either product-use group and subjects who use no product.
Days 1-6: With the disuse of antiperspirants and deodorants, bacterial abundance significantly increased during the course of our experiment (F(5,55) = 4.92, p = 0.001) (Table 2). Subsequent tests performed using Tukey HSD indicated that abundances on Day 5 were significantly higher than on Day 1 (p < 0.05) and Day 2 (p < 0.05). Abundances on Day 4 were significantly higher than on Day 1 (p < 0.05). This increase over time was independent of product use, gender, or any interactions among these variables.
Sequence-based results
To understand the association between long-term use of antiperspirant or deodorant on more complete armpit bacterial communities (rather than just the abundance of culturable taxa), we analyzed the richness (i.e., OTU diversity) and composition of armpit microbes detected via sequencing of 16S rDNA in participants sampled on two days: early (Day 3 of the experiment, the second day of continued product disuse) and late (Day 6, the fifth day of continued product disuse). On both of these days, the only differences we expected were those due to long-term product use, which might occur due to differences in who chooses to use deodorant or antiperspirant (with individuals with more odorous microbial assemblages perhaps more likely to use product) or due to the product use itself. If the former, we would expect individuals with more product use to tend to be the same individuals with more slow-growing odor producing bacteria such as corynebacteria. If the latter, then we expected the reverse.
Richness
Before rarefaction, the pyrosequencing output yielded 133,098 reads that passed the quality screens of the 454 platform and QIIME filtering. After rarefaction to 1,000 reads per sample we observed a total of 106 OTUs of bacteria and Archaea in armpits of the 17 individuals in our study, with an average richness of 22 OTUs per person. Because Archaea were represented by just two OTUs found on the same person (Candidatus nitrososphaera and Halococcus), we hereafter focus on bacterial results.
Samples of bacteria from regular antiperspirant users, two and five days after stopping underarm product use, were more diverse (mean number of OTUs ± SD = 31.2 ± 24.4) than those of deodorant users (mean ± SD = 10.7 ± 6.2), or users of no product (mean ± SD = 20.5 ± 13.4) (ANOVA, F(2,14) = 3.91, p = 0.045, Fig. 1, Tables S1 and S3). Subsequent Tukey HSD tests supported our finding that bacterial communities of antiperspirant users were significantly richer than those of deodorant users (p < 0.05). No significant differences were observed between bacterial richness of either group of product-users and users of no product. Neither a significant effect of day nor an interaction was observed. The number of OTUs was not correlated with the abundance of culturable bacteria (r = −0.37, p > 0.05).
Composition
The composition of bacteria was strongly associated with underarm product use (PerMANOVA, p(treatment) = 0.0001, Fig. 2), but not sampling period (where compositional differences were measured as a function of the relative abundance of taxa based on read number). The five bacterial OTUs that contributed most to differences between each pair of product use groups based on a SIMPER analysis are shown in Table 1. These results were largely consistent with those from a Metastats analysis (White, Nagarajan & Pop, 2009), which indicated some OTUs were significantly more abundant in specific product use groups (Table S4). Overall, the bacteria that contributed the most to differences between microbial assemblages among product use groups were an OTU of Staphylococcaceae (although at the L6 level this OTU was classified as "Staphylococcaceae_other," we expect it almost certainly is a Staphylococcus) and an OTU of Corynebacterium. The common OTU of Staphylococcaceae was reduced in participants who did not use underarm products compared to either deodorant users (who had >186% more of the Staphylococcaceae OTU; Table 1) or antiperspirant users (who had >181% more of the Staphylococcaceae OTU; Table 1). Conversely, the common OTU of Corynebacterium was most common in participants who did not use underarm products; they had >109% more Corynebacterium than participants who regularly used deodorant and >335% more Corynebacterium than those who used antiperspirant (Table 1). We examined the evenness of armpit microbes (see Supplemental Information), but there were no significant effects of sampling time, our treatments, or their interaction on this metric of community structure.

Figure 1: Mean composition and richness of bacterial OTUs for all three product user types, combining OTU data from two and five days after stopping product use. Bacteria with greater than 10 sequence reads across all users in each category are shown. The top three bacterial OTUs are shown; a full list is available in Table S1. Antiperspirant users have much richer armpits (22% other bacteria versus 5% for deodorant users and 9% for no product users). At the L6 level of OTU assignment, the OTU for the highly abundant Staphylococcaceae was "Staphylococcaceae_other," indicating that the genus was unassigned. We refer to this OTU for simplicity throughout the remainder of the figures and text as Staphylococcaceae; it represents one group within Staphylococcaceae and does not denote all identified OTUs in this family.
To further examine patterns in the abundances of Staphylococcaceae and Corynebacterium, mixed model ANOVAs were performed to test for day and gender effects, and interactive effects of these variables with product use, on each of these two taxa, both of which were important in our analyses but are also known to be functionally important armpit taxa. An effect of product was observed (Fig. 3A, F(2,11) = 8.36, p = 0.006), such that the abundance of the OTU of Staphylococcaceae was lower in participants who did not use underarm products compared to users of antiperspirant (Tukey HSD, p = 0.005) or deodorant (Tukey HSD, p = 0.007), as expected based on results of the composition analysis (see mean abundance values in Table 1). Conversely, users of no product had significantly higher abundances of Corynebacterium than users of antiperspirant (Tukey HSD, p < 0.001) or deodorant (Tukey HSD, p = 0.006) (Fig. 3B, F(2,11) = 16.56, p < 0.001). Corynebacterium abundance tends to be positively associated with the strength of body odors (Taylor et al., 2003). This pattern is the opposite of what we would expect if the bacteria in the armpits of product users are different from those of non-product users because product users are often individuals who have more odorous biotas (Harker et al., 2014).

Figure 2: Non-metric multidimensional scaling plot of armpit microbes based upon rarefaction using 1,000 sequence reads. Small symbols represent individuals from each treatment group and large symbols represent group centroids ±1 SE.
No other significant main or interactive effects were observed, including no effect of day on abundances of Staphylococcaceae or Corynebacterium. Whereas Callewaert et al. (2013) found that females tended to be dominated by Staphylococcus spp., and males by Corynebacterium species, we observed no effect of gender on abundances of these two bacterial lineages.
As a result of the differential associations of these two bacterial taxa with product use, the abundances of Staphylococcaceae and Corynebacterium were strongly negatively correlated with each other across all individuals (Fig. 4; Spearman rank correlation: r = −0.697, p < 0.0001).
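The Spearman correlation reported here is simply a Pearson correlation computed on ranks rather than raw read counts, which makes it robust to the skewed abundance distributions typical of OTU data. A self-contained sketch with hypothetical per-person read counts (the study used SAS for this analysis):

```python
def rankdata(values):
    """Assign 1-based ranks, averaging ranks across ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

staph  = [50, 300, 450, 700, 820]   # hypothetical read counts per person
coryne = [900, 600, 400, 150, 30]
rho = spearman(staph, coryne)       # perfectly opposite ranks -> -1.0
```

With perfectly opposite rank orders the coefficient reaches its extreme of −1; the study's observed r = −0.697 indicates a strong but not perfect inverse relationship between the two taxa.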
DISCUSSION
Overall, we found an initial negative effect of antiperspirant, but not deodorant, on bacterial abundance using a traditional culture-based approach. After one day of ceasing product use, antiperspirant users had fewer colonies of culturable bacteria than deodorant users or users of no product. Colony abundance increased, particularly across Days 2-5, with continued product disuse. When all participants began to use antiperspirant, bacterial counts declined. Together these results demonstrate, as expected, that antiperspirants are capable of strongly reducing the biomass of the armpit microbial community, largely independent of the historic product use of individuals. In short, antiperspirant appears to have a clear negative effect on bacterial abundance, one that can be detected in individuals using antiperspirant and that can be produced experimentally. The effect of deodorant on bacterial abundance is more modest, if present at all. These results were not unexpected, in light of two key differences between deodorants and antiperspirants: (1) many deodorants are ethanol-based and likely more water soluble and easier to wash away than antiperspirants; and (2) antiperspirants contain aluminum-based salts that reduce sweat by forming precipitates that physically block sweat glands (Benohanian, 2001) and thus may reduce resources necessary for the growth of microbial communities.

Figure 3: Mean abundances of (A) Staphylococcaceae and (B) Corynebacterium of participants who regularly used antiperspirant, deodorant or no underarm products, based upon sequence data. Underarm product use significantly affected the abundance of both Staphylococcaceae and Corynebacterium (2-way ANOVA: p < 0.0001 for both microbes). However, neither sampling period nor its interaction with product use significantly affected either microbe (2-way ANOVA: p > 0.05).

Figure 4: Relationship between the abundances of Staphylococcaceae and Corynebacterium across all individuals. Spearman rank correlation: r = −0.697, p < 0.0001.
A key question though, given the relative difficulty of culturing many axillary bacteria, particularly those that are slow-growing, is the extent to which changes in the abundance of easily cultured bacteria in response to a short-term experiment match the differences in the entire assemblage, when evaluated using more comprehensive sequencing approaches, particularly differences seen in association with long-term use. Here, several practical challenges exist. Long term experiments on deodorant and antiperspirant, experiments conducted over years, require very committed participants. They are also, however, ethically questionable if prolonged use of deodorants and antiperspirants can have persistent effects on microbial assemblages that in turn, affect health and well-being. In addition, it has proven very challenging to isolate DNA from participants actively using underarm products. Our approach to dealing with this challenge was to use sequence-based approaches to consider the assemblages of microbes in armpits immediately after product use was ceased (as a measure of long-term differences among individuals differing in product use, with only a short period for any shift post disuse). In addition, we considered samples from several days later, once some shift might have been able to occur. These sequence-based data are largely correlational rather than experimental and yet allow strong inference when coupled with experimental data on easily culturable microbes.
Based on sequencing of 16S rDNA, long-term antiperspirant users tended to have more bacterial OTUs in their armpits (multiple days after ceasing product use) than did long-term deodorant users (after disuse of product; Fig. 1). We expected that application of underarm products would negatively affect dominant species, thereby creating more opportunities for rare species to become established. However, we did not observe this effect in deodorant users, who actually had fewer species of bacteria in their armpits compared to armpits of participants who use no product (Fig. 1). Our findings are consistent with those of Callewaert et al. (2014). In line with Callewaert et al.'s (2014) comparison of individuals, we observed a larger change (i.e., increased community richness) in the armpit microbial community when regular product users stopped wearing and then resumed use of antiperspirants, compared to those asked to stop then resume use of deodorants. As such, we expect that our results represent a general effect of antiperspirant use. We can only speculate as to why effects of stopping long-term deodorant and antiperspirant use might have such disparate effects, though note that the particular antiperspirant products (i.e., brands) our participants reported to us all contain aluminum salts. These compounds may alter the underarm habitat (in a manner that deodorants do not), and provide a selective advantage to bacteria not historically common in the human armpit habitat. Based on our study, this underarm habitat alteration lasts multiple days after stopping product use (see Fig. 1, which is based on data from days 2 and 5). The highly abundant microbes identified here compare reasonably well to other studies analyzing moist areas of human skin showing high abundance of Staphylococcaceae and Corynebacteriaceae, among others (see Grice & Segre, 2011 for a review).
The composition of armpit bacterial communities of both antiperspirant and deodorant users was associated with differences in abundances of the two most abundant bacterial taxa, an OTU of Staphylococcaceae and an OTU of Corynebacterium, relative to users of no product. The Staphylococcaceae OTU was the most dominant bacterial group in participants ceasing antiperspirant and deodorant use, followed by the Corynebacterium OTU, whereas this dominance order was reversed among users of no product. In our sequence-based study, we cannot preclude the possibility that individuals who use deodorant and antiperspirant tend to have non-random assemblages of armpit bacteria. But we would expect that, if anything, such individuals would tend to have more odoriferous assemblages of microbes. Armpit odor is largely associated with Corynebacterium, such that we would then expect more Corynebacterium in product users: we found the opposite.
Unlike many taxa on the body, these two taxa have been relatively well characterized with regard to their biology. Species of Corynebacterium are associated with the dominant odors of the armpits and individuals with more Corynebacterium are likely to have stronger body odor (Taylor et al., 2003). Ceasing the use of deodorant and antiperspirant was associated with lower levels of Corynebacterium, in line with expectations, given that companies that sell underarm products aim to reduce body odor through reduction in overall bacterial counts.
We recognize that two additional considerations may have affected our results and those of other studies of armpits. First, because as much as 90% of the bacterial OTUs identified through DNA sequence-based surveys such as ours are bacterial taxa that typically cannot be cultured under standard laboratory conditions, we chose to culture bacteria to inform when to conduct our sequence-based analyses (i.e., to determine if residual product impaired colony growth), and to make general comparisons across product use groups. As such, we used standard LB plates grown under aerobic conditions, which, like all media, only allow the culturing of a subset of lineages. Although this did not affect the overall conclusions of our sequence-based results, it may have affected the abundances, in that perhaps those bacteria that were abundant in the armpits of non-product users (e.g., Corynebacterium) were not those easily grown on LB agar. However, this does not account for the increased abundance of culturable bacteria in non-product users from days 1 to 4, which is either due to chance and small sample size, or to some systemic change that applied to all of our participants.
Second, in comparing the armpit communities of product users vs. non-product users, we expect that non-product users began our study with more stable armpit communities than those of product users who had recently ceased using product; such an imbalance is a standard feature of press experiments, which are common in ecology. Press experiments are designed to understand whether the application of some treatment and then its removal have similar effects (powerful evidence for the influence of the treatment). However, press experiments only provide direct evidence about the experimental effect for the time interval of the study. Our focus was on eight days, sufficient time for many generations of bacteria; it is very possible that, had our experiment been shorter or longer, our results might have been different. The armpit is a dynamic system, and future studies might usefully follow up with longer-term experiments, though only after careful consideration of the ethics of such experiments, given that the bacteria whose composition is altered by deodorant and antiperspirant have direct health consequences (Christensen & Brüggemann, 2014).
A larger sample size would allow us to test the hygiene effects of washing frequency and soap type (as these may disturb armpit communities, even those of non-product users), as well as additional demographic factors such as age and gender. The latter would be informative, as gender differences in the abundances of Staphylococcus and Corynebacterium have been noted in other studies (Callewaert et al., 2013), and would help to tease apart gender biases from our product use categories.
CONCLUSIONS
Although it has long been recognized that skin bacterial composition varies strongly among individuals, accounting for such variation has been a challenge, one that has led some authors to suggest that the composition of the skin biome might simply be stochastic, a function of chance colonizations and unpredictable dynamics. Here, we find that the composition of the armpit microbiome is highly predictable, being dominated by Staphylococcaceae and Corynebacterium, and strongly influenced by product use. Species of the Staphylococcaceae include beneficial symbionts (Rosenthal et al., 2011; Christensen & Brüggemann, 2014) but also dangerous pathogens (Otto, 2009; Ryu et al., 2014). It is noteworthy in this regard that the armpit is a common site for pathogenic MRSA infections in athletes (Cohen, 2008). However, we cannot discern which of these taxa are being favored with product use based on our data.
The broader health consequences of antiperspirant and deodorant use are not well studied. Although it has been suggested that deodorant and/or antiperspirant use is associated with incidence or age of breast cancer diagnosis (McGrath, 2003), support for this association is equivocal at best (Hardefeldt, Edirimanne & Eslick, 2013). Whether antiperspirant or deodorant tends to favor less beneficial or even pathogenic bacterial species does not seem to have ever been considered. Recent work indicates that the microbial community structure of the skin, including its commensal/symbiotic residents, exerts significant influence on human health and disease, particularly in the emergence of pathogenic strains of Staphylococcus aureus, S. epidermidis, and Propionibacterium acnes (Otto, 2009;Rosenthal et al., 2011;Christensen & Brüggemann, 2014). Rosenthal et al. (2011) hypothesized that the skin microbiome may be ''an antibiotic resistance reservoir,'' as has been shown to be the case with the human gut microbiome (Sommer, Dantas & Church, 2009). Our work clearly demonstrates that antiperspirant use strikingly alters armpit bacterial communities, making them more species rich. Because antiperspirants only came into use within the last century, we presume that the species of bacteria they favor are not those historically common in the human armpit. Whether these species may interfere with the function of beneficial skin symbionts, contribute antibiotic resistance genes, prove benign, or perhaps even confer beneficial effects to human health remains an intriguing avenue for further study. | 2018-05-08T17:59:17.365Z | 0001-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "d63e7704b053ec46f8fb4fd60a7e3839431b79b0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7717/peerj.1605",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d63e7704b053ec46f8fb4fd60a7e3839431b79b0",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
255991374 | pes2o/s2orc | v3-fos-license | Socio-demographic determinants of Toxoplasma gondii seroprevalence in migrant workers of Peninsular Malaysia
The number of migrants working in Malaysia has increased sharply since the 1970’s and there is concern that infectious diseases endemic in other (e.g. neighbouring) countries may be inadvertently imported. Compulsory medical screening prior to entering the workforce does not include parasitic infections such as toxoplasmosis. Therefore, this study aimed to evaluate the seroprevalence of T. gondii infection among migrant workers in Peninsular Malaysia by means of serosurveys conducted on a voluntary basis among low-skilled and semi-skilled workers from five working sectors, namely, manufacturing, food service, agriculture and plantation, construction and domestic work. A total of 484 migrant workers originating from rural locations in neighbouring countries, namely, Indonesia (n = 247, 51.0%), Nepal (n = 99, 20.5%), Bangladesh (n = 72, 14.9%), India (n = 52, 10.7%) and Myanmar (n = 14, 2.9%) were included in this study. The overall seroprevalence of T. gondii was 57.4% (n = 278; 95% CI: 52.7–61.8%) with 52.9% (n = 256; 95% CI: 48.4–57.2%) seropositive for anti-Toxoplasma IgG only, 0.8% (n = 4; 95% CI: 0.2–1.7%) seropositive for anti-Toxoplasma IgM only and 3.7% (n = 18; 95% CI: 2.1–5.4%) seropositive with both IgG and IgM antibodies. All positive samples with both IgG and IgM antibodies showed high avidity (> 40%), suggesting latent infection. Age (being older than 45 years), Nepalese nationality, manufacturing occupation, and being a newcomer in Malaysia (excepting domestic work) were positively and statistically significantly associated with seroprevalence (P < 0.05). The results of this study suggest that better promotion of knowledge about parasite transmission is required for both migrant workers and permanent residents in Malaysia. Efforts should be made to encourage improved personal hygiene before consumption of food and fluids, thorough cooking of meat and better disposal of feline excreta from domestic pets.
Background
Toxoplasma gondii is one of the most common protozoan parasites affecting up to one-third of the world's population [1][2][3]. Human infection may occur via ingestion of food or water contaminated with oocysts shed in the faeces of infected cats; consumption of undercooked or raw meat; consumption of raw oysters, clams, or mussels containing tissue cysts [4][5][6][7][8]; exposure to contaminated soil through activities such as gardening or children playing in sandpits [9] and vertical transmission from mother to foetus [10,11].
Toxoplasmosis in immunocompromised people may cause damage to the brain, eyes, or other organs and is associated with severe acute infection or with reactivation of past infection. Infections acquired during pregnancy may cause severe damage to the foetus [11]. In immunocompromised patients, reactivation of latent infection can cause life-threatening encephalitis [2,12].
In recent years, there have been also many attempts to link toxoplasmosis with schizophrenia and other mental health problems (such as bipolar disorder) [13][14][15]. Toxoplasma gondii has emerged as a prime candidate when investigating the relationship between infectious agents and schizophrenia; some individuals with adult toxoplasmosis develop psychotic symptoms [13].
The standard method for diagnosis is serological testing, based on the detection of Toxoplasma-specific immunoglobulin IgG and IgM antibodies in serum, and this test is routinely implemented in many parts of the world [16][17][18]. The detection methods previously employed in Malaysia have included the indirect hemagglutination (IHA) test, the Sabin-Feldman dye test, the indirect fluorescent antibody test [19] and the enzyme-linked immunosorbent assay (ELISA) [20].
Over the years, the economy in Malaysia has transformed into an emerging multi-sector economy and since the 1970s, its economic vigour has been facilitated largely by imported migrant workers. The number of migrant workers arriving in Malaysia from neighbouring countries has grown exponentially and there is concern that diseases endemic in their countries may be inadvertently imported [21], despite compulsory medical screening prior to entering the workforce in Malaysia. However, pre-employment screening does not currently include screening for the presence of most parasitic infections, including toxoplasmosis. The infection could have substantial public health implications with regard to the productivity of the migrant labour force and its contribution to the Malaysian economy and well-being.
Previous studies have presented a mixed picture of the seroprevalence of T. gondii infection among migrant compared with indigenous workers in Malaysia. Chan et al. [21] reported that up to 42% (138/336) of mainly Indonesian plantation workers and workers in detention camps were positive for specific IgG, while twenty (6%) were positive for IgM. Chan et al. [22] also noted a higher prevalence of T. gondii infection among local plantation workers (IgG: n = 89, 44.9%) than among migrants (n = 171, 34.1%); however, there was no statistically significant difference in the prevalence of raised IgM between migrant workers (n = 26, 5.2%) and locals (n = 17, 8.6%). Amal et al. [23] reported a lower rate of raised specific IgG among workers (n = 16, 18.8%) from the Indian subcontinent compared with locals (n = 89, 44.9%) from the same plantation and detention camp. Similarly, another study showed that just over a third (34.1%, 171/501) of migrant plantation workers and individuals in detention camps were IgG positive and 5.2% (26/501) were IgM positive, with the highest infection rate among Nepalese workers (46.2%) compared with other ethnic groups [24].
The current study was a component of a broader project aiming to assess the range and extent of parasitic infections brought into Malaysia by the migrant worker population. The study is motivated by the need to assess the health status of migrant workers originating from countries with low socioeconomic backgrounds, living in deprived environments with poor sanitation and low hygiene practices [25]. Here we report on seroprevalence of T. gondii among migrant workers in Malaysia and identify key factors associated with this infection.
Study population and sample collection
This study was carried out from September 2014 to August 2015 among informed, consenting low-skilled and semi-skilled workers from five working sectors in Peninsular Malaysia, namely: manufacturing; service; agriculture and plantation; construction, and domestic work. Questionnaires were distributed to participants to gather relevant information related to the study. An individual clinical interview with a questionnaire was conducted to collect information on sociodemographic data, migration history, environmental health, life-style habits (consumption of raw meat and vegetables), recent illness and occupational health and safety. The interview process was performed through an interpreter for those migrant workers who had difficulty understanding the Malay (national) and English languages. All participants were fully informed of the nature of the study and completed the consent forms.
After consent was obtained and the questionnaire answered, approximately 5 ml of venous blood was drawn into a plain tube (without anticoagulant) by trained medical assistants and nurses using disposable syringes and needles. The blood samples were transported to the Parasitology Laboratory, Institute of Biological Science, Faculty of Science, University of Malaya. Blood samples were spun at 1,500× g for 10 min and the serum samples were kept at -20°C until use.
Detection of immunoglobulin G and M antibodies to T. gondii
Screening for anti-T. gondii antibodies was performed using enzyme-linked immunosorbent assay (ELISA) commercial kits for immunoglobulin G (IgG) and M (IgM) (Trinity Biotech Captia™, New York, USA) in accordance with the manufacturer's instructions. For the IgG assay, positive results were defined as ≥ 1.23 IU/ml, indicating latent or pre-existing Toxoplasma infection. Positive results for IgM assays were also defined as ≥ 1.23 IU/ml, indicating recent infection. All samples that were both IgG-positive and IgM-positive were tested using an IgG avidity assay (NovaLisa, Dietzenbach, Germany) according to the manufacturer's instructions. Toxoplasma antibodies with high avidity (> 40%) indicate latent infection, while Toxoplasma antibodies with low avidity (≤ 40%) indicate a probable acute or recent infection.
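The decision rules above (the 1.23 IU/ml ELISA cutoff and the 40% avidity threshold) can be expressed as a small helper function. This is an illustrative sketch based only on the cutoffs stated in the text, not code from the study; the function name and return labels are our own.

```python
def classify_serostatus(igg, igm, avidity=None, cutoff=1.23):
    """Classify a serum sample using the cutoffs reported in the text.

    igg, igm : ELISA readings in IU/ml (positive if >= 1.23 IU/ml).
    avidity  : IgG avidity in percent; only relevant when both IgG and
               IgM are positive (> 40% indicates latent infection).
    """
    igg_pos = igg >= cutoff
    igm_pos = igm >= cutoff
    if igg_pos and igm_pos:
        if avidity is None:
            return "IgG+/IgM+ (avidity test required)"
        return "latent infection" if avidity > 40 else "probable acute/recent infection"
    if igg_pos:
        return "latent or pre-existing infection"
    if igm_pos:
        return "recent infection"
    return "seronegative"
```

Under these rules, the 18 samples positive for both antibodies would all be classed as latent infections, since every one showed avidity above 40%.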
Data analysis
Prevalence estimates (percentage of participants infected) are shown with 95% confidence intervals (CI) calculated using the method described by Rohlf & Sokal (1995) [26]. Prevalence was analyzed using maximum likelihood techniques based on log linear analysis of contingency tables using the software package SPSS (Version 22), in three steps. First, full factorial models were fitted including the following 'intrinsic' factors: sex (2 levels, males and females), age (5 age classes comprising those < 25 years old, 25-34 years old, 35-44 years old, 45-54 years old and those > 54 years), nationality (5 countries, Bangladesh, India, Indonesia, Myanmar and Nepal) and immune status, which was considered as a binary factor (presence/absence of anti-Toxoplasma antibodies). For each level of analysis in turn, beginning with the most complex model, involving all possible main effects and interactions, those combinations that did not contribute significantly to explaining variation in the data were eliminated in a stepwise fashion beginning with the highest-level interaction (backward selection procedure in SPSS). A minimum sufficient model was then obtained, for which the likelihood ratio of the chi-square (χ 2 ) statistic was not significant, indicating that the model was sufficient in explaining the data. The importance of each term (i.e. interactions involving infection) in the final model was assessed by the probability that its exclusion would alter the model significantly and those values relating to interactions that included presence/absence of infection-specific antibodies (as described above) are given in the text.
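As a quick sketch of the confidence-interval computation described above, the snippet below uses the simple normal-approximation (Wald) interval. This is an assumption on our part: the paper cites Rohlf & Sokal (1995) for its intervals, and their exact method may differ slightly, so the values here come close to, but need not exactly match, the reported 52.7-61.8% for the overall seroprevalence of 278/484.

```python
import math

def prevalence_ci(positives, n, z=1.96):
    """Prevalence with an approximate 95% CI (normal approximation).

    Note: an approximation of the Rohlf & Sokal (1995) intervals
    cited in the text, not necessarily the identical formula.
    """
    p = positives / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Overall seroprevalence from the study: 278 of 484 workers positive.
p, lo, hi = prevalence_ci(278, 484)
print(f"{100*p:.1f}% (95% CI: {100*lo:.1f}-{100*hi:.1f}%)")
```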
Models were then fitted including the following 'extrinsic' factors: employment sector (5 sectors: construction, manufacture, plantation, food service and domestic), years of residence in Malaysia (2 categories: less than 1 year (year 1) and more than 1 year (year 2)), accommodation (3 types: hostel, construction site and own/rented home) and education (4 levels: primary school, secondary school, university and no formal schooling) and infection. Finally, in a third step, models were fitted comprising only the intrinsic and extrinsic factors that had been found to be statistically significantly associated with infection status as measured by presence of IgG/IgM.
Where relevant, we also fitted in turn models with just each factor and infection status, in order to resolve/clarify complex interactions that could not be simplified by the backward selection procedure. We have also provided in the tables the probability values from these models.
Intrinsic factors associated with the seroprevalence of T. gondii infections
Seropositivity of T. gondii was analysed statistically in relation to sociodemographic factors. In the minimum sufficient model identified by the backwards stepwise selection procedure that included sex, age and nationality, only age (χ2 = 11.989, df = 4, P = 0.017) and nationality (χ2 = 32.275, df = 4, P ≤ 0.001) were found to be statistically significantly associated with seropositivity for T. gondii IgG, independently (Table 2). Analyses of anti-Toxoplasma IgM, and of seropositivity based on a combination of both IgG and IgM antibodies, did not find any of the three intrinsic factors to be statistically significantly associated with seroprevalence (Table 2).
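As an illustration of the kind of association test summarized above, the sketch below computes a Pearson chi-square statistic from IgG-positive/negative counts reconstructed by rounding the per-country sample sizes and IgG seroprevalences reported in this paper. Note that the paper reports likelihood-ratio chi-square values from log-linear models, so the Pearson value obtained here is close to, but not identical to, the reported 32.275; the reconstructed counts are our own approximation.

```python
def pearson_chi2(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total  # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# IgG-positive / IgG-negative counts reconstructed (by rounding) from
# the reported per-country sample sizes and IgG seroprevalences.
table = [
    [77, 22],    # Nepal      (77.8% of 99)
    [144, 103],  # Indonesia  (58.3% of 247)
    [33, 39],    # Bangladesh (45.8% of 72)
    [20, 32],    # India      (38.5% of 52)
    [4, 10],     # Myanmar    (28.6% of 14)
]
chi2 = pearson_chi2(table)  # df = (5 - 1) * (2 - 1) = 4
```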
Extrinsic factors associated with the seroprevalence of T. gondii infections
Of the four extrinsic factors considered (employment sectors, years of residence in Malaysia, type of accommodation and level of education), only two factors were found to be statistically significantly associated with seropositivity of anti-Toxoplasma IgG in models that only included each factor in turn with presence of Toxoplasma IgG, i.e. employment sector (χ2 = 21.306, df = 4, P ≤ 0.001) and years of residence in Malaysia (χ2 = 8.294, df = 1, P = 0.004) (Table 2). Seroprevalence was significantly and positively associated with those employed in the manufacturing industry and those recently arrived in Malaysia (the latter with the exception of domestic workers, in whom the converse negative association with years of employment was observed). Finally, in a multifactorial model in which we fitted only the significant effects from the analyses of intrinsic and extrinsic factors (age, country of origin, years of residence and employment sector with Toxoplasma IgG seropositivity), three significant interactions were found, the strongest being employment sector interacting with years of residence and Toxoplasma IgG (χ2 = 13.478, df = 4, P = 0.009). In four cases (employment sectors construction, manufacturing, plantation and the service industry), prevalence was higher in year 1 (72.7, 79.5, 48.6 .6%, respectively), indicating a reduction in infection between the two years, whilst for those in the domestic sector, prevalence increased from 41.7% in year 1 to 61.0% in year 2. The other two interactions, years of residence interacting with age and Toxoplasma IgG (χ2 = 9.603, df = 4, P = 0.048) and years of residence interacting with nationality and Toxoplasma IgG (χ2 = 9.628, df = 4, P = 0.047), were only marginally significant and thus were not explored further.
Models in which extrinsic factors were fitted with seropositivity of anti-Toxoplasma IgM, either alone or in combination with IgG, did not identify any of the factors as statistically significant (Table 2).
Discussion
This study investigated the status of T. gondii infection among migrant workers in Malaysia using standard commercial kits that detect anti-Toxoplasma IgG and IgM antibodies. The results showed that more than half of the workers had latent infection (53.0%), indicative of previous exposure to T. gondii. The high prevalence of latent T. gondii infection among these workers suggests that most of these infections were probably acquired in their home countries, where toxoplasmosis is known to be prevalent [27][28][29].
In the present study, two of the factors considered as intrinsic to the sampled individuals showed highly significant associations with T. gondii infection. The first variable was age class, with prevalence being higher among workers older than 45 years (74.3 to 76.9%) compared with younger workers (51.4 to 59.2%). This is in agreement with previous studies [44,[50][51][52][53] in which infection prevalence increased with age [32,39]. A recent study among the indigenous communities of Malaysia (Orang Asli) showed significantly higher seroprevalence (P ≤ 0.001) among those aged 12 years and older (52.6%), compared with younger participants (31.2%) [44]. In the current study, prevalence was very similar in both sexes.
The second significant factor affecting seroprevalence was the migrant workers' country of origin, which is thought to be related to behavioural and cultural practices such as unintentional ingestion of oocysts shed in cats' faeces and/or consumption of undercooked or raw contaminated meat [44,54,55]. Seroprevalence (by IgG) was highest among workers from Nepal (77.8%), followed by Indonesia (58.3%), Bangladesh (45.8%), India (38.5%) and Myanmar (28.6%), in agreement with a study in 2008 [24]. The strength of country of origin as a significant explanatory factor of T. gondii seroprevalence is most likely due to a combination of dietary habits, behavioural risks, environmental conditions, socioeconomic status and poor personal hygiene practices [24]. High prevalence of infection is common among ethnic groups in Nepal owing to their habitual ingestion of minced raw meat or insufficiently cooked meat, both of which may harbour tissue cysts of the parasite [28,29]. Similarly, in Indonesia, T. gondii infection is also considered to be a food-borne disease. Gandahusada (1991) [27] linked infection in Indonesia to the presence of domestic animals and the eating of raw or partially cooked meat, with seroprevalence ranging from 2-63% in humans, 35-73% in cats, 75% in dogs, 11-36% in pigs, 11-61% in goats, and less than 10% in cows. In the present study, all the workers originated from rural areas in their respective countries, where infections are highly prevalent especially among poor and deprived communities. In such communities, domestic and feral cats are the most likely sources of environmental contamination, leading to infection in humans either directly or indirectly through tissue-cyst-bearing domestic animals. Significant correlations between consumption of unboiled water and T. gondii seropositivity have also been noted in a few studies, particularly among disadvantaged and indigenous communities living in rural and remote areas, with toxoplasmosis being considered a water-borne disease in these places [14,[56][57][58]. Contamination of water reservoirs with cat faeces [56] and collection of water from shallow wells located on farms where infected cats are present [57,58] constitute possible sources of human infection with T. gondii oocysts.
We found that infections were significantly higher among workers from the manufacturing sector (76.3%) compared to workers in other sectors. Rai et al. [29] highlighted that the nature of one's occupation increases the risk of acquiring T. gondii infection especially for those engaged in agricultural activities. However, the present analysis revealed the lowest prevalence of infection (45.1%) amongst plantation workers. This latter result may be biased to some extent as working sectors were commonly dominated by a particular nationality. In Malaysia, Nepalese workers dominated the manufacturing sector (81.7% were Nepalese) and the high prevalence of infection in the manufacturing sector (74.2%) was largely attributable to the Nepalese (74.7%).
Workers with an employment history of less than one year or newly arrived workers were most frequently infected with T. gondii except for those in the domestic sector, indicating that acquisition of infection for most immigrant workers was most likely to be from their country of origin [27][28][29]. The reason for the increase in prevalence between year 1 and 2 among those in the domestic sector is not clear, but may be related to closer contact with domestic cats, which are commonly kept as pets in Malaysia. The majority of domestic workers in this study lived in their own or rented houses (98.1%) with pets and/or likely to clean cat litter trays as part of their domestic duties. By contrast, workers in other employment sectors mostly live in hostels where they are not allowed to keep pets.
Toxoplasmosis has been associated with the incidence (and prevalence) of schizophrenia and other human affective and psychotic disorders. Antipsychotic drugs known to be effective in the treatment of schizophrenia also inhibit some parasites, predominantly T. gondii [13,59]. However, access to health care in general and mental health care in particular, is likely to be very low among poor and marginalized communities, with migrant workers less likely to seek care due to stigmatization and fear of job loss.
Conclusions
In conclusion, high seroprevalence of T. gondii among migrant workers in Peninsular Malaysia was found to be positively and statistically significantly associated with age (> 45 years), nationality (Nepalese), employment sector (manufacturing) and shorter duration of residence in Malaysia (with the exception of domestic workers, in whom the converse association was shown). Therefore, our results call for the public health authorities in Malaysia to implement a health education programme not only for migrant workers (who have pivotally fuelled the Malaysian economy) but also for the general public. This will help to increase public awareness of toxoplasmosis, and especially of the importance of cats and the consumption of contaminated meat and water as major potential sources of infection [14,15].
"year": 2017,
"sha1": "2461b35210a5f5314dfa011311704aec25d8d4b0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13071-017-2167-8",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "2461b35210a5f5314dfa011311704aec25d8d4b0",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": []
} |
221788979 | pes2o/s2orc | v3-fos-license | A New Approach for Detecting Sleep Apnea Using a Contactless Bed Sensor: Comparison Study
Background At present, there is an increased demand for accurate and personalized patient monitoring because of the various challenges facing health care systems. For instance, rising costs and lack of physicians are two serious problems affecting the patient’s care. Nonintrusive monitoring of vital signs is a potential solution to close current gaps in patient monitoring. As an example, bed-embedded ballistocardiogram (BCG) sensors can help physicians identify cardiac arrhythmia and obstructive sleep apnea (OSA) nonintrusively without interfering with the patient’s everyday activities. Detecting OSA using BCG sensors is gaining popularity among researchers because of its simple installation and accessibility, that is, their nonwearable nature. In the field of nonintrusive vital sign monitoring, a microbend fiber optic sensor (MFOS), among other sensors, has proven to be suitable. Nevertheless, few studies have examined apnea detection. Objective This study aims to assess the capabilities of an MFOS for nonintrusive vital signs and sleep apnea detection during an in-lab sleep study. Data were collected from patients with sleep apnea in the sleep laboratory at Khoo Teck Puat Hospital. Methods In total, 10 participants underwent full polysomnography (PSG), and the MFOS was placed under the patient’s mattress for BCG data collection. The apneic event detection algorithm was evaluated against the manually scored events obtained from the PSG study on a minute-by-minute basis. Furthermore, normalized mean absolute error (NMAE), normalized root mean square error (NRMSE), and mean absolute percentage error (MAPE) were employed to evaluate the sensor capabilities for vital sign detection, comprising heart rate (HR) and respiratory rate (RR). Vital signs were evaluated based on a 30-second time window, with an overlap of 15 seconds. 
In this study, electrocardiogram and thoracic effort signals were used as references to estimate the performance of the proposed vital sign detection algorithms. Results For the 10 patients recruited for the study, the proposed system achieved reasonable results compared with PSG for sleep apnea detection, such as an accuracy of 49.96% (SD 6.39), a sensitivity of 57.07% (SD 12.63), and a specificity of 45.26% (SD 9.51). In addition, the system achieved close results for HR and RR estimation, such as an NMAE of 5.42% (SD 0.57), an NRMSE of 6.54% (SD 0.56), and an MAPE of 5.41% (SD 0.58) for HR, whereas an NMAE of 11.42% (SD 2.62), an NRMSE of 13.85% (SD 2.78), and an MAPE of 11.60% (SD 2.84) for RR. Conclusions Overall, the recommended system produced reasonably good results for apneic event detection, considering the fact that we are using a single-channel BCG sensor. Conversely, satisfactory results were obtained for vital sign detection when compared with the PSG outcomes. These results provide preliminary support for the potential use of the MFOS for sleep apnea detection.
Data Analysis
For each patient, encrypted binary files were first decrypted using proprietary software and stored in comma-separated values (CSV) file format. Afterward, all CSV files were concatenated into a single CSV file representing a one-night data recording. Each CSV file contained eight data columns, i.e., Unix timestamp, amplified raw data (1e7 x electric current), filtered BCG signal, ambient sound, ambient temperature, ambient light, unamplified raw data (1e6 x electric current), and the power supplied to the light source. We considered only the first and the seventh data columns in our data analysis, i.e., the Unix timestamp and the unamplified raw data. After concatenating the data chunks for each patient, we synchronized the acquired raw data with the start and end times of the PSG study. Essentially, the unamplified raw data represent a mixture of two signals (i.e., the BCG and respiratory effort signals), in addition to noise or motion artifacts caused by frequent body movements.
Vital signs detection
A Chebyshev Type I bandpass filter was applied to artifact-free data to obtain the BCG and respiratory signals; the passbands were 2.5-5 Hz (0.5 dB ripple) and 0.01-0.4 Hz (0.5 dB ripple), respectively. Several attempts have been made in the literature to compute the heart rate from BCG signals, including time-domain approaches, frequency-domain approaches, wavelet analysis, and clustering-based approaches [4]. A recent comparative study by Suliman et al. [4] concluded that the wavelet analysis-based approach proposed by Sadek et al. [5] was one of the two high-performing methods in terms of average peak detection rate, average false alarm rate, and average mean absolute error between true and predicted peaks. This task is challenging because the J-peaks of the BCG signal (equivalent to the R-peaks of the ECG signal) are not consistent and vary both within and between subjects. For J-peak detection, we used the approach of Sadek et al. [5], which applies the multiresolution analysis of the maximal overlap discrete wavelet transform (MODWT) [6]. This method decomposes the BCG signal into smooth and detail time series components by passing the signal through low-pass and high-pass filters, and then selects the component that agrees with the J-peaks. The biorthogonal wavelet basis function Bior3.9 at level 4 was employed for the analysis, with the fourth-level smooth coefficient chosen to represent the cardiac cycles. Finally, J-peaks were traced with a peak detector. The same wavelet basis function was used across all patients recruited in the study. Heart rates were measured using a sliding time window of 30 seconds with an overlap of 15 seconds. The ECG signal was used as a reference to detect interbeat intervals (IBIs); for this purpose, we selected the well-known Pan and Tompkins algorithm owing to its reliable results [7].
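The pipeline above (Chebyshev bandpass, beat localization, IBIs, heart rate per 30-second window) can be illustrated on a synthetic signal. This sketch deliberately simplifies the paper's method: it substitutes a plain peak detector for the MODWT decomposition, and the sampling rate, filter order, and passband placed around the cardiac fundamental of the toy signal are all our assumptions (real J-peak energy lies in the 2.5-5 Hz band stated in the text).

```python
import numpy as np
from scipy.signal import cheby1, filtfilt, find_peaks

fs = 50.0                      # assumed sampling rate (not stated in the text)
t = np.arange(0, 30, 1 / fs)   # one 30-second analysis window

# Synthetic stand-in for the raw mat signal: a 1.2 Hz "cardiac"
# component (72 bpm) mixed with a larger 0.25 Hz "respiratory" one.
raw = 0.2 * np.sin(2 * np.pi * 1.2 * t) + 1.0 * np.sin(2 * np.pi * 0.25 * t)

# Chebyshev Type I bandpass with 0.5 dB ripple, as in the text.
# NOTE: for this toy signal we band-pass around the 1.2 Hz cardiac
# fundamental instead of the paper's 2.5-5 Hz J-peak band.
b, a = cheby1(N=4, rp=0.5, Wn=[0.8, 2.0], btype="bandpass", fs=fs)
cardiac = filtfilt(b, a, raw)

# Beat-to-beat intervals -> heart rate over the 30-second window
# (a stand-in for MODWT-based J-peak detection).
peaks, _ = find_peaks(cardiac, distance=fs * 0.4)  # beats >= 0.4 s apart
ibis = np.diff(peaks) / fs                         # inter-beat intervals (s)
hr_bpm = 60.0 / ibis.mean()
```

For the 1.2 Hz synthetic component, the recovered rate comes out close to 72 bpm, confirming that the bandpass removes the respiratory component before peak detection.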
Respiratory rate, on the other hand, can be measured directly from the band-pass-filtered data via a peak detector. However, before locating breathing cycles, we first removed the nonlinear trend from the signal by subtracting a third-order polynomial fit. Respiratory rates were calculated using a sliding time window of 30 seconds with an overlap of 15 seconds. The effort signal obtained from the thoracic belt was used as a reference to detect respiratory cycles. Compared with the abdominal effort and airflow (i.e., pressure and thermistor) signals, the thoracic effort signal was the most highly correlated with the one acquired from the optical fiber mat.
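The detrend-then-count procedure above can be sketched as follows. The sampling rate and the minimum breath spacing are assumptions introduced for the example.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 50.0  # assumed sampling rate in Hz (not stated in the text)

def respiratory_rate(sig, fs, win_s=30, step_s=15):
    """Remove a third-order polynomial trend, then count breathing cycles
    in 30 s windows sliding by 15 s. Returns rates in breaths per minute."""
    t = np.arange(len(sig)) / fs
    trend = np.polyval(np.polyfit(t, sig, 3), t)
    detrended = sig - trend
    rates = []
    win, step = int(win_s * fs), int(step_s * fs)
    for start in range(0, len(detrended) - win + 1, step):
        seg = detrended[start:start + win]
        # assume breaths are at least ~1.5 s apart (<= 40 breaths/min)
        peaks, _ = find_peaks(seg, distance=int(1.5 * fs))
        rates.append(len(peaks) * 60.0 / win_s)
    return rates

t = np.arange(0, 120, 1 / fs)
# 0.25 Hz breathing (15 breaths/min) plus a slow quadratic drift
sig = np.sin(2 * np.pi * 0.25 * t) + 0.01 * t**2
rates = respiratory_rate(sig, fs)
```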
Statistical analysis
All data processing and analysis were performed in Python (version 3.7.6) using PyCharm Professional Edition. Graphical illustrations of the data analysis and evaluation metrics, including, for example, the Pearson correlation coefficient and bar plots with error bars, were produced in Python. The Seaborn 0.10.0 Python data visualization library was used to create the Pearson correlation coefficient plots, while RStudio version 1.2.5033 (RStudio Inc.) was used to create the Bland-Altman plots. Table 1 shows the error metrics used in our approach along with their mathematical formulas.
Sensitivity
Sensitivity (Sens) is often presented in proportion and describes the probability that a test will yield a positive result if the disorder is present. It is determined as the number of correct positive predictions divided by the total number of positives [8][9][10]. Likewise, it can be described as a recall or true positive rate. In our case, it defines the proportion of correctly identified apneic events.
Specificity
Specificity (Spec) is often presented in proportion and describes the probability that a test will yield a negative result if the disorder is not present. It is calculated as the number of correct negative predictions divided by the total number of negatives [8][9][10]. Likewise, it can be described as a true negative rate. In our case, it defines the proportion of correctly identified non-apneic events.
Accuracy
Accuracy (Acc) is often presented in proportion and describes the proportion of all instances that are correctly classified. It is calculated as the number of all correct predictions divided by the total number of all instances in the dataset [8][9][10].
Cohen kappa coefficient
The Cohen kappa coefficient, i.e., kappa statistic, is often adopted to measure the inter-annotator agreement. In other words, it presents the percentage of agreement beyond that predicted by chance [11]. In common with the correlation coefficient, it can vary from -1 to 1, where 0 defines the amount of agreement that can be predicted from random chance, and 1 suggests a precise agreement between the raters. According to Cohen [12], we can translate kappa results as follows: "values ≤ 0 as indicative of no agreement and 0.01-0.20 as none to slight, 0.21-0.40 as fair, 0.41-0.60 as moderate, 0.61-0.80 as substantial, and 0.81-1.00 as an almost perfect agreement" [13].
Matthews correlation coefficient
The Matthews correlation coefficient (MCC) is a special case of the Pearson correlation coefficient, i.e., it is a cross-tabulation method of calculating the Pearson correlation coefficient between true and predicted values. The value of the coefficient varies between -1 and +1. A coefficient of +1 denotes perfect classification, 0 a coin-tossing classifier, and -1 perfect misclassification. It is also known as the phi coefficient and is the only binary classification metric that yields a high score only when the predictor correctly identifies the majority of positive data instances and the majority of negative data instances [14].
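The metrics described above (and listed in Table 1) can all be computed from the four confusion-matrix cells. The counts below are made up for illustration; the formulas are the standard definitions.

```python
# Evaluation metrics from the confusion-matrix cells of a binary
# (apneic vs non-apneic event) classifier.
def metrics(tp, fp, tn, fn):
    n = tp + fp + tn + fn
    sens = tp / (tp + fn)                        # recall / true positive rate
    spec = tn / (tn + fp)                        # true negative rate
    acc = (tp + tn) / n
    # Cohen's kappa: observed agreement vs agreement expected by chance
    p_o = acc
    p_yes = ((tp + fn) / n) * ((tp + fp) / n)    # chance agreement on "apnea"
    p_no = ((tn + fp) / n) * ((tn + fn) / n)     # chance agreement on "no apnea"
    p_e = p_yes + p_no
    kappa = (p_o - p_e) / (1 - p_e)
    # Matthews correlation coefficient (phi coefficient)
    mcc = (tp * tn - fp * fn) / (
        ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    )
    return sens, spec, acc, kappa, mcc

# Hypothetical event counts
sens, spec, acc, kappa, mcc = metrics(tp=40, fp=10, tn=45, fn=5)
```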
Bland-Altman plot
The Bland-Altman plot [15,16] is a tool to quantify the agreement between two quantitative measurement methods. This is done by creating limits of agreement (LoA), which are calculated using the mean and standard deviation of the differences between the two measurements [17]. This graphical representation plots the differences between the two measurements on the y-axis and the averages of the two measurements on the x-axis. It is a favored method for measuring the agreement between two medical devices because two devices are unlikely to be in exact agreement. Most importantly, it estimates how close pairs of measurements are, as small differences between devices are not likely to influence patient decisions [18].
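The LoA computation above can be sketched as follows; the paired heart-rate values are synthetic and the 1.96 multiplier is the conventional 95% limit.

```python
import numpy as np

def bland_altman(a, b):
    """Return per-pair means, differences, bias and 95% limits of agreement
    between two paired measurement series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    mean = (a + b) / 2.0
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return mean, diff, bias, loa

# Hypothetical paired heart rates: mat-derived vs PSG reference
hr_mat = [60, 62, 65, 70, 72]
hr_ref = [61, 61, 66, 69, 73]
mean, diff, bias, (lo, hi) = bland_altman(hr_mat, hr_ref)
```

Plotting `diff` against `mean` with horizontal lines at `bias`, `lo` and `hi` reproduces the standard Bland-Altman figure.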
"year": 2020,
"sha1": "fe1fbd59cb50c8b051e7450a87e79ad69b142ad4",
"oa_license": "CCBY",
"oa_url": "https://jmir.org/api/download?alt_name=jmir_v22i9e18297_app1.pdf&filename=f99bc58b3257699a1a4905170ef74523.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b81f3754a9a15f6ac3b2d38e339e015859ce4b95",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Cis‐regulatory effect of HPV integration is constrained by host chromatin architecture in cervical cancers
Human papillomavirus (HPV) infections are the primary drivers of cervical cancers, and often HPV DNA gets integrated into the host genome. Although the oncogenic impact of HPV-encoded genes is relatively well known, the cis-regulatory effect of integrated HPV DNA on host chromatin structure and gene regulation remains less understood. We investigated genome-wide patterns of HPV integrations and associated host gene expression changes in the context of host chromatin states and topologically associating domains (TADs). HPV integrations were significantly enriched in active chromatin regions and depleted in inactive ones. Interestingly, regardless of chromatin state, genomic regions flanking HPV integrations showed transcriptional upregulation. Nevertheless, upregulation (both local and long-range) was mostly confined to TADs with integration, not affecting adjacent TADs. A few TADs showed recurrent integrations associated with overexpression of oncogenes within them (e.g. MYC, PVT1, TP63 and ERBB2) regardless of proximity. Hi-C and 4C-seq analyses in a cervical cancer cell line (HeLa) demonstrated chromatin looping interactions between integrated HPV and the MYC/PVT1 regions (~500 kb apart), leading to allele-specific overexpression. Based on these findings, we propose that HPV integrations can trigger multimodal oncogenic activation to promote cancer progression.
Introduction
Cervical cancer is the fourth most common cancer type among women worldwide, with the majority of cases being reported from developing and underdeveloped countries [1]. Studies have established that most cervical cancer cases can be attributed to persistent Human papillomavirus (HPV) infection, particularly the high-risk subtypes such as HPV16 and HPV18 [2,3]. HPV contains a circular double-stranded DNA genome of size ~8 kb, and it infects basal epithelial cells in the cervix. During the initial stages of infection, the HPV DNA exists in episomal form. However, over the course of epithelial cell differentiation, proliferation or neoplastic changes, the HPV DNA gets integrated into the host genome. This integration process is likely to occur at host genomic regions that are sensitive to DNA strand breaks and share microhomology with the HPV DNA [4,5]. A large-scale genome study of cervical tumours has shown that 80% of the tumours with HPV had integration in the host genome [6].

Abbreviations: APOT, amplification of papillomavirus oncogene transcripts; CESC, cervical squamous cell carcinoma and endocervical adenocarcinoma; eRNA, enhancer RNA; HPV, human papillomavirus; SE, super enhancer; TAD, topologically associating domain; TCGA, The Cancer Genome Atlas; WGS, whole genome sequencing; WXS, whole exome sequencing.
Tumours with HPV DNA integration show overexpression of viral oncogenes such as E6 and E7, likely due to perturbations or DNA breakpoints in the viral regulatory gene E2 (which controls the expression of E6/E7) or increased stability of the viral transcripts upon fusion with host genes [3]. E6 and E7 proteins are known to interfere with the host p53 and RB pathways, respectively, and thus favour cancer cell proliferation by avoiding apoptosis and cell cycle arrest [7,8]. Besides our understanding of the oncogenic roles of these viral proteins (E6/E7), efforts to delineate the cis-regulatory effects of HPV integration on the host chromatin structure and gene regulation have been rather limited.
HPV integration in the host genome can be a single or clustered (multiple nearby) event. The latter is often found together with genomic alterations (including amplification, deletion and translocations) in the nearby host regions, likely due to HPV integration mediated DNA replication and recombination processes [9,10]. Besides, HPV integrations are associated with the upregulation of host genes which are either directly affected by the integration or in its immediate vicinity [6,11-13]. Furthermore, recent studies using cell lines showed that HPV integration can cause long-range effects in cis through changes in the host chromatin interactions and subsequent gene dysregulation. For example, HPV16 integration in the W12 (human cervical keratinocyte) cell line was shown to alter chromatin interactions (involving both host:host and host:viral DNA), as well as host gene expression in the nearby regions [14]. Similarly, in the cervical adenocarcinoma cell line HeLa, the HPV18 DNA integration in chromosome 8 was shown to have long-range chromatin interactions with the promoter region of the MYC oncogene (located approximately 500 kb away in the same chromosome) and was associated with its overexpression [15,16]. However, the extent of these long-range chromatin interactions mediated by HPV integrations genome-wide and the associated host gene expression changes are still unexplored in cervical tumours.
Previous studies have shown that the HPV integrations from cervical tumours and cell lines were enriched in the transcriptionally active open-chromatin regions [17-19]. However, these findings were based on HPV integrations derived mostly from transcription-based assays (such as RNA-seq and amplification of papillomavirus oncogene transcripts (APOT)), and thus probably have a bias towards transcriptionally active regions. Hence, a whole-genome DNA-based HPV integration detection approach is required to understand the distribution of HPV integrations across the genome and to study their impact on chromatin structure and gene regulation. Moreover, a recent Pan Cancer Analysis of Whole Genomes consortium study has also demonstrated the need for DNA-based methods to obtain a comprehensive view of viral association with cancers [20].
To address the aforementioned limitations, we explored the genome-wide HPV integration patterns and their impact on host gene expression in the context of chromatin states and topologically associating domains (TADs). For this, we collated genome-wide HPV integrations in cervical cancers detected using DNA-based approaches and compared them with the chromatin state information from cancer and normal cell lines. We found that the HPV integrations are significantly enriched in active chromatin regions and depleted in inactive chromatin regions, as compared to the expected counts. Interestingly, regardless of the host chromatin state, transcriptional upregulation was observed in the immediate vicinity of the HPV integration regions (up to 10 kb). Further investigation of the long-range effects of HPV integration revealed that the TADs with integration have higher gene expression as compared to samples without integration in the same TAD. More importantly, this difference was not observed in the TADs adjacent to the HPV integrated TADs. Moreover, the recurrent HPV integration analysis at the TAD level revealed both the direct and long-range effects of HPV integration on the expression of cancer-related genes (such as MYC, PVT1, TP63 and ERBB2). Additionally, we used Hi-C and 4C-seq analyses to show that the HPV integration in HeLa cells mediates long-range chromatin interactions with the oncogenes MYC and PVT1, and drives their overexpression in an allele-specific manner. Interestingly, these chromatin interactions were also mostly confined to the same TAD, not extending to the neighbouring TADs. Together, our results suggest the cis-regulatory potential of integrated HPV DNA that drives upregulation of host genes through changes in chromatin interactions (but mostly within the same TAD) in cervical cancer. This underscores that HPV integration can mediate multiple modes of oncogenic activation and thereby acts as a strong driver conferring selective growth advantage to the cancer cells.
HPV integrations
We collated HPV integrations in cervical cancer patient samples from previous studies (including TCGA-CESC and others) [4,11]. This dataset consists of HPV integrations identified through genome-wide approaches (whole-genome sequencing [WGS] and HPV capture methods) and exome-wide approaches (whole-exome sequencing [WXS] and RNA-seq based integrations). In total, we obtained 1324 integrations from 326 samples (see Table S1), after removing five samples which had an extreme number of HPV integrations (above the 99th percentile in the respective methods). We categorised the HPV integrations into two main sets: (a) GW-HPV-int, containing 617 integrations from 212 samples identified through genome-wide approaches; and (b) all-HPV-int, containing 1324 integrations from 326 samples, which includes HPV integrations detected from genome-wide, RNA-seq and exome-based approaches (see Table S1). In the latter set, if in any sample HPV integrations were identified from both WGS and RNA-seq, we merged the overlapping integrations to avoid redundancy. For the analyses shown in Figs 1 and 2, we used the GW-HPV-int set, whereas for the others (Figs 3 and 4; Fig. S3) we used the all-HPV-int set. Only samples from TCGA-CESC were used whenever expression was being plotted together with the HPV integration. The HPV integrations (n = 381 from 95 samples) from small cell cervical cancer [21] were treated as an independent dataset to test the TAD-based recurrence analysis (Fig. S4A).
Enrichment analysis
For the genome-wide enrichment analysis of HPV integration (shown in Fig. 1), the genomic coordinates of HPV integration sites were extended (500 bp flanking on either side) to a total length of 1 kb from the centre (such that all HPV integrations have a uniform size distribution, and to compute the GC content around HPV integration sites). To compute the expected integrations, we randomly sampled an equal number of regions of the same length and similar GC content to the observed HPV integrations. For each feature, we calculated the number of observed HPV integrations overlapping with it and compared this against the expected HPV integration counts using the Chi-squared test. The P-values were subjected to multiple-hypothesis testing correction using the Benjamini-Hochberg method (FDR). For ChromHMM annotations, we considered the centre of HPV integrations (and random regions) for the overlap.
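The per-feature test above can be sketched as follows: observed overlap counts are compared against counts expected from GC- and length-matched random regions with a chi-squared goodness-of-fit test, followed by Benjamini-Hochberg correction. All counts here are hypothetical.

```python
import numpy as np
from scipy.stats import chisquare

n_total = 617                                   # total integrations (GW set)
observed = {"Tx": 120, "Enh": 80, "Quies": 60}  # hypothetical overlap counts
expected = {"Tx": 80,  "Enh": 60, "Quies": 90}  # from matched random regions

# Chi-squared goodness-of-fit per feature: [overlapping, non-overlapping]
pvals = {}
for feat in observed:
    obs = [observed[feat], n_total - observed[feat]]
    exp = [expected[feat], n_total - expected[feat]]
    pvals[feat] = chisquare(obs, f_exp=exp).pvalue

# Benjamini-Hochberg (step-up) FDR correction
feats = sorted(pvals, key=pvals.get)            # ascending p-values
m = len(feats)
fdr, prev = {}, 1.0
for rank, feat in enumerate(reversed(feats)):   # walk from largest p down
    i = m - rank                                # 1-based rank of this p-value
    prev = min(prev, pvals[feat] * m / i)
    fdr[feat] = prev

# log2 observed/expected ratio, as plotted on the x-axis of Fig. 1
log2_oe = {f: np.log2(observed[f] / expected[f]) for f in observed}
```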
Expression analysis
To check the expression of genes and non-coding elements in the immediate vicinity (Fig. 2A,C; Fig. S2A,C), we downloaded the pre-computed normalised total RNA expression values (FPKM) in the 10 kb region around the HPV integration regions from Nguyen et al. [11]. These integration regions were defined by merging HPV integrations that were within 10 kb at the sample level (if any, to avoid overlapping biases), and then a 10 kb flanking region was added on both sides to compute the total RNA expression (by considering the normalised expression level of all transcripts overlapping the genomic region) [11]. In the case of enhancer RNA (eRNA) expression, we followed the above steps to compute the eRNA expression (from super-enhancers) around the HPV integration regions. In the case of TAD level expression analysis (unique TAD and sample combination), we computed the mean expression using all genes (with expression from TCGA-CESC) in the respective TADs. The eRNA expression from super-enhancer regions and the gene expression of TCGA-CESC samples were obtained from the TCeA database (https://bioinformatics.mdanderson.org/public-software/tcea/) and the GDC portal (https://portal.gdc.cancer.gov/), respectively.
For the gene level expression comparison with respect to HPV integration status (Fig. 4), we used normalised RSEM values. The expression level represented as a z-score in Fig. S5 was calculated using the mean and standard deviation from all the samples with and without integrations (at the gene level). For the pathway level enrichment analysis, we computed the single sample GSVA score [28] for each of the 50 hallmark gene sets from MSigDB (http://www.gsea-msigdb.org/gsea/msigdb/). The Mann-Whitney U test (one-sided) was used to compare the distribution of GSVA scores in samples with and without integration for each pathway. The P-values were subjected to multiple-hypothesis testing correction using the Benjamini-Hochberg approach, and the pathways significant at FDR < 5% are shown in Fig. S4B,C.
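The per-pathway comparison and the z-scoring above can be sketched as follows; the scores are synthetic stand-ins for GSVA values, with the "with integration" group shifted upward by construction.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical GSVA-like scores for one pathway
rng = np.random.default_rng(0)
with_int = rng.normal(0.3, 0.2, 20)      # samples with HPV integration
without_int = rng.normal(0.0, 0.2, 60)   # samples without integration

# One-sided Mann-Whitney U: are scores higher in integrated samples?
stat, p = mannwhitneyu(with_int, without_int, alternative="greater")

# z-score of one sample's value against the pooled distribution,
# analogous to the gene-level z-scores of Fig. S5
pooled = np.concatenate([with_int, without_int])
z = (with_int[0] - pooled.mean()) / pooled.std(ddof=0)
```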
Allele-specific analysis
For allele-specific expression/TF-binding analysis, GATK ASEReadCounter (v4.1.9.0) [29] was used. Only reads with a minimum base quality of 10 and a minimum read mapping quality of 20 were used. A minimum read depth of 8 reads (5 for GRO-seq and TF ChIP-seq datasets) at each heterozygous SNP was used as a cutoff. Only those features supported by at least 15 reads in total at the heterozygous SNP positions were used further. For copy number correction, we used read depth from WGS. A binomial test followed by Bonferroni correction was used to detect the significance of allele-specific expression/TF binding with respect to copy number at each feature level from haplotype A.
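The copy-number-aware binomial test above can be sketched as follows. All counts are hypothetical, and for simplicity this sketch uses a one-sided test (the text describes a binomial test without specifying sidedness in detail).

```python
from scipy.stats import binom

# Reads supporting haplotype A at a heterozygous SNP are tested against
# the expectation set by the WGS-derived copy-number ratio.
hapA_reads, total_reads = 42, 50   # hypothetical haplotype-A / total reads
cn_a, cn_b = 2, 1                  # hypothetical copy numbers of haplotypes
p_expected = cn_a / (cn_a + cn_b)  # expected haplotype-A read fraction

# One-sided binomial test for excess haplotype-A expression:
# P(X >= hapA_reads) under Binomial(total_reads, p_expected)
p_value = binom.sf(hapA_reads - 1, total_reads, p_expected)

# Bonferroni correction over the number of features tested
n_features = 100                   # hypothetical number of tested features
p_adj = min(1.0, p_value * n_features)
```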
4C-seq experiment and data analysis
The 4C-seq experiment was done following the protocol described in Farooq et al. [31]. The primary digestion was performed with DpnII (NEB) and the secondary digestion with NlaIII (NEB). The primer used for the HPV18 viewpoint in HeLa and the number of reads in each replicate are given in Table S3. The 4C-ker package [32] was used to analyse the 4C-seq data, with the hg19 genome used as reference.
Hi-C analysis
Hi-C data from HeLa were obtained from ENCODE (https://www.encodeproject.org/experiments/ENCSR693GXU/). A hybrid hg19-HPV18 genome, considering the HPV18 genome as an additional 'chromosome', was constructed (human genome version: GRCh37.75; HPV18 version: NC_001357.1; the HPV18 genome orientation was reversed, as shown in Adey et al. [16]). The hybrid hg19-HPV18 genome was indexed with BWA v0.7.17 'index' mode and SAMTOOLS v1.6 'faidx'. Hi-C reads from each of the replicates were mapped separately to the hybrid genome using BWA (v0.7.17, 'mem' mode, parameters: -t 20 -E 50 -L 0 -v 0). Filtering of reads was done based on mapping parameters (min mapq = 1, samtools view -Sb -q 1 -F 256). After intersecting reads with HindIII intervals and removing self-ligation and duplicate pairs, replicates were pooled together. HiCExplorer (v2.1.1) was used to obtain the contact list and the contact matrices at 10 kb resolution. In order to jointly normalise HPV18 and human chr8 Hi-C contacts, the interaction profile involving HPV18 and each 10 kb locus of chr8 was inserted at the integration site discovered by Adey et al. [16] (approximately chr8:128230000-128240000 in hg19 coordinates). Practically, this consisted of replacing, in the intra-chr8 Hi-C contact matrix, the intra-chr8 contact row (and column) of that 10 kb locus with the HPV18-chr8 contact profile. The matrix thus constructed was then normalised with the Iterative Correction of Hi-C data (ICE) algorithm [33], implemented in the function 'normICE' of the R package 'HiTC' [34]. Finally, the distance-matched interaction analyses were performed in the following way: (a) we took all within-TAD3189 HPV18-chr8 normalised interaction values and the genomic distances between loci; (b) we extracted all HPV18-chr8 normalised interaction values at the same genomic distances as in (a) but occurring outside TAD3189; (c) for each of the distances in (a), we randomly sampled 1000 chr8-chr8 pairs of loci and extracted their normalised interaction values.
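The row/column replacement and distance-matched comparison above can be illustrated with a toy contact matrix. The matrix values, bin count and bin indices are all made up; this is only a sketch of the matrix manipulation, not the full ICE-normalised pipeline.

```python
import numpy as np

n_bins = 200                         # toy chr8 segment at 10 kb resolution
rng = np.random.default_rng(1)
chr8 = rng.random((n_bins, n_bins))
chr8 = (chr8 + chr8.T) / 2           # symmetric intra-chr8 contact matrix

hpv_profile = rng.random(n_bins)     # hypothetical HPV18-chr8 contact profile
int_bin = 120                        # toy bin containing the integration site

# Replace the row and column of the integration bin with the HPV18 profile
hybrid = chr8.copy()
hybrid[int_bin, :] = hpv_profile
hybrid[:, int_bin] = hpv_profile     # keep the matrix symmetric

# Distance-matched comparison: take interactions at the same bin distance
dist = 15
within = hybrid[int_bin, int_bin + dist]
matched = [hybrid[i, i + dist] for i in range(n_bins - dist) if i != int_bin]
```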
Promoter capture Hi-C analysis
Raw sequencing data from promoter capture Hi-C in HeLa (upon Cohesin/CTCF depletion) were downloaded from GEO (accession code: GSE145736) [35]. FASTQ files were processed as described in Section 2.7. Contacts between HPV18 and MYC/PVT1 were extracted by subsetting paired-end reads in which one mate fell in HPV18 and the other mate fell in the promoter regions of MYC and PVT1 (namely, in their transcription start sites ±10 kb). Fisher's Exact tests (implemented with the R base function fisher.test) were performed between the number of HPV18-chr8 contacts within and outside the X promoter under the Y control or Y depleted condition, with X = (MYC, PVT1) and Y = (SCC1, CTCF), for a total of four tests. The processed data of transcriptional response (SLAM-seq) in SCC1-depleted versus control cells were downloaded from Thiecke et al. [35].
HPV integrations are enriched in active chromatin regions and depleted in inactive chromatin regions
At first, we asked whether the HPV integrations (n = 617 from 212 samples) detected from the genome-wide DNA-based approaches are distributed randomly or enriched in specific functional regions of the host genome. For this, we compared the HPV integration sites with ChromHMM annotations [22,36], which categorise the genome into broad functional annotations based on various histone modification profiles, from the two closest cell lines available: HeLa (cervical adenocarcinoma) and NHEK (normal human epidermal keratinocytes); and with full-stack ChromHMM [23], representing a universal genome annotation unified from multiple cell types. To check for the enrichment of integrations, we compared the number of observed HPV integrations overlapping with each of these annotations with the expected counts computed using random sites of equal size and similar GC content (see Section 2).
With both HeLa and NHEK ChromHMM annotations, we observed that, compared to the expected, HPV integrations were significantly enriched (Chi-squared test, FDR < 0.05) in the transcriptionally active regions (TxFlnk, Tx, TxWk), enhancers (EnhG, Enh), and zinc finger protein gene/repeat regions (ZNF/Rpts); whereas a significant depletion (FDR < 0.05) was observed in polycomb repressed/heterochromatin regions (ReprPC, ReprPCWk, Het) and quiescent regions (Quies; in NHEK but not in HeLa) (Fig. 1A,B). We further merged all the ChromHMM annotations into two major categories, active and inactive, based on the gene-regulation/chromatin activity of the regions [22] (see Section 2). In both HeLa (Fig. 1C) and NHEK (Fig. 1D), HPV integrations were significantly enriched in active regions (one-sample Chi-squared test, P < 0.0001) and significantly depleted in inactive regions (P < 0.05) as compared to the expected counts. Similar results were obtained when we used the universal genome annotation (full-stack ChromHMM, Fig. S1A,B), which provides a cell-type agnostic view of chromatin states. Together, this suggests that HPV integrations observed in the tumours are preferentially enriched in active chromatin regions.
Further, we checked which specific histone modification marks associated with the above annotations are enriched with HPV integrations. We observed that the HPV integrations were significantly enriched (Chi-squared test, FDR < 0.05) in various active histone modification regions (such as H3K4me1, H3K4me2, H3K4me3, and H3K27ac) in both HeLa and NHEK (Fig. 1E). In contrast, HPV integrations were significantly depleted (FDR < 0.05) only in the repressive histone modification regions (H3K27me3) in both HeLa and NHEK (Fig. 1E). We further extended this analysis to other cervical cancer cell lines (SiHa, Ca Ski, S12, C-33 A), for which active histone modification marks (H3K27ac) were available [37] (Fig. S1C). Despite these cervical cancer cell lines being either HPV positive (HeLa: HPV18; SiHa, Ca Ski, S12: HPV16) or HPV negative (C-33 A), the results obtained were consistent with the above observation that HPV integrations are enriched in active chromatin regions. Together, this suggests that the host chromatin structure influences the HPV integration patterns regardless of malignancy.
Human papillomavirus integration has previously been shown to be enriched in regions of fragile sites [4], which contain DNA repeats that could form non-B DNA conformations and are associated with genomic instability. To check this further, we computed the enrichment of HPV integrations in the DNA regions predicted to form non-B DNA conformations (see Section 2). Among all the non-B forms of DNA, only direct repeats showed significant enrichment of HPV integration as compared to the expected (Chi-squared test, FDR < 0.05) (Fig. S1D). Further, to check this in the context of chromatin states, we calculated the odds ratio of observed versus expected HPV integrations in active versus inactive regions (for HeLa and NHEK separately). This showed that even in the non-B DNA regions, integration tends to occur more frequently in the active regions as compared to inactive regions (odds ratio > 1 and Fisher exact test, FDR < 0.05), except for A-phased repeats (with both HeLa and NHEK ChromHMM) and G-quadruplex regions (only with HeLa ChromHMM) (Fig. S1E,F).
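The active-versus-inactive odds-ratio test above can be sketched as a 2x2 Fisher's exact test on observed versus expected counts. The counts below are hypothetical.

```python
from scipy.stats import fisher_exact

# 2x2 table of HPV integration counts (made-up values):
#                   observed  expected
active_counts   = [400,      300]
inactive_counts = [217,      317]

table = [active_counts, inactive_counts]
odds_ratio, p = fisher_exact(table)
# odds_ratio > 1 indicates integrations favour active over inactive regions
# relative to the matched random expectation
```

In the paper, one such test is run per non-B DNA category and per cell line, with the resulting P-values FDR-corrected.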
Taken together, these results suggest that HPV integrations are not randomly distributed across the genome. They are highly enriched in active chromatin regions and depleted in repressed chromatin regions. This could be due to the combined or individual effects of DNA sequence context, DNA accessibility, DNA damage response (linked to the chromatin states) and positive selection.
HPV integration affects host transcriptional activity regardless of the host chromatin state
Previous studies have shown that HPV integration can affect host transcriptional activity in its immediate vicinity [11,13]; however, how this is influenced by the host cell-type specific chromatin states has not yet been studied systematically. To check this, we used TCGA cervical cancer (CESC) samples (only WGS with matched gene expression data available, n = 50) with 151 HPV integration regions and plotted the host endogenous normalised total RNA expression from all transcript types that overlap with the 10 kb flanking regions around the integration [11] (see Section 2). We observed that the samples with HPV integration showed increased expression in their neighbouring 10 kb regions as compared to the mean normalised expression from the samples without HPV integration in the same region, regardless of whether the HPV integration was in an active (Mann-Whitney U test, P = 0.0048 for HeLa and P = 7.28e-05 for NHEK) or inactive (P = 0.0065 for HeLa and P = 0.057 for NHEK) region (Fig. 2A; Fig. S2A). Following that, we asked whether the activity of host regulatory elements (like enhancers) is also enhanced around the HPV integration regions. To test this, we looked at the level of endogenous eRNA (indicative of the functional activity of enhancers [38]) transcribed from the annotated super enhancer (SE) regions, if any, within the 10 kb flanking regions around the integration in TCGA-CESC samples (see Section 2). We observed an increased SE eRNA expression in the HPV integrated samples compared to the samples without integrations (Fig. 2B; Fig. S2B). Again, as shown above, this was not influenced by the chromatin state of the integrated region.
We additionally noticed that the control samples (without HPV integration) showed significantly higher expression in active as compared to inactive regions (P = 0.00019 for HeLa, Fig. 2A and P = 0.00031 for NHEK, Fig. S2A), as expected. This underscores that the ChromHMM annotations from cell lines (HeLa/NHEK) match well with the tumour tissues in terms of the transcriptional activity observed in the active and inactive regions. More importantly, in samples with HPV integration, we observed a higher expression in the active as compared to the inactive regions (P = 0.055 for HeLa, Fig. 2A and P = 0.0061 for NHEK, Fig. S2A). This indicates that HPV integration in active chromatin regions further enhances the host transcription activity in its vicinity. Further, we asked whether all HPV integrations in a sample lead to overexpression in their immediate vicinity or not. For this, we computed the expression fold change for each HPV integration region (the ratio of expression in the 10 kb flanks around integration regions to the mean expression from other samples without integration in the same region) [11]. This showed that the expression association with HPV integration was highly variable, and not all HPV integration regions were associated with higher transcriptional activity in their vicinity (Fig. 2C; Fig. S2C). This could be due to the epigenetic suppression of certain integrated HPV regions or impaired regulatory activity [39].
Taken together, these results suggest that HPV integration leads to transcriptional upregulation and enhanced enhancer activity in its immediate vicinity, regardless of the host chromatin state. Nevertheless, the transcriptional activity associated with HPV integration was relatively higher in active chromatin regions, as compared to inactive regions.
Transcriptional activity associated with HPV integration is mostly confined to the same TAD
We next asked whether the HPV integration can mediate or alter the host long-range chromatin interactions (such as enhancer-promoter or promoter-promoter), and thereby dysregulate gene expression in tumours.
Fig. 1. Enrichment of HPV integrations in functionally annotated regions. (A) Enrichment of HPV integrations in the ChromHMM annotated regions from the HeLa cell line. The x-axis represents the log2 of the observed/expected number of HPV integrations. The y-axis represents the different annotations and the observed number of HPV integrations overlapping them (given in brackets). The P-values were computed using a Chi-squared goodness-of-fit test followed by FDR correction. The colour of the dots indicates whether the adjusted P-value is below the significance level of 5% or not. (B) Same as (A) but with ChromHMM annotations from the NHEK cell line. (C, D) Bar plots showing the frequency of observed and expected integrations in the active and inactive regions defined by combining ChromHMM annotations in HeLa (C) and NHEK (D) (see Section 2). The P-value was calculated using a one-sample Chi-squared test. (E) Enrichment of HPV integration in various histone modification regions from HeLa and NHEK. The x-axis represents the log2 of the observed/expected number of HPV integrations and the y-axis represents the negative log10 of the adjusted P-value (Chi-squared test followed by FDR correction). The horizontal dashed line represents an FDR cut-off of 5%. The colour and size of the dots represent the cell line and the number of observed HPV integrations for each of the histone marks, respectively.
To test this, we compared the host gene expression at the level of TADs, obtained from the HeLa and NHEK cell lines [25]. TADs act as functional units of genome organisation by restricting the interactions between regulatory elements and thereby controlling gene regulation [25,40]. TAD boundaries are commonly bound by insulator proteins like CTCF that prevent interactions across TADs. We hypothesised that the transcriptional overexpression associated with HPV integration would be largely restricted to the TADs where the HPV integration is localised. To test this, we plotted the average gene expression at the TAD level in the TCGA-CESC samples with HPV integration and compared it with the mean expression from samples without integration in the same TAD (see Section 2). We further extended this analysis to the immediate upstream (5′) and downstream (3′) TADs for comparison. Only the TADs with HPV integration showed overall increased expression compared to the samples without integration (Wilcoxon signed-rank test, P < 0.0001) (Fig. 3A; Fig. S3A). However, neither the upstream nor the downstream neighbouring TADs showed any effect on the TAD level expression of genes (P > 0.01) (Fig. 3A; Fig. S3A).
Further, we asked whether an increase in the number of integrations in a TAD would result in more perturbations in gene regulation and overexpression. For this, we separated TADs based on whether they had 1, 2, or more than 2 integrations and compared the TAD-level gene expression among them, along with the samples without integration. This showed that an increase in the number of integrations in a TAD was indeed associated with an increase in gene overexpression as compared to the samples without integration in the same TADs (Fig. 3B; Fig. S3B).
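The grouping step described above, counting integrations per TAD per sample and binning them into 1, 2, or more than 2, can be sketched as follows (the integration calls here are hypothetical):

```python
from collections import Counter

# Hypothetical integration calls as (tad_id, sample_id) pairs; a pair can
# repeat when a sample carries several integrations in the same TAD.
calls = [("TAD1", "s1"), ("TAD1", "s1"), ("TAD2", "s2"),
         ("TAD1", "s3"), ("TAD1", "s3"), ("TAD1", "s3")]

counts = Counter(calls)

def bin_label(n: int) -> str:
    """Bin a per-TAD integration count into the categories used above."""
    return "1" if n == 1 else ("2" if n == 2 else ">2")

groups = {key: bin_label(n) for key, n in counts.items()}
```

TAD-sample combinations in each bin would then be compared against the no-integration samples, as in the expression analysis above.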
Direct HPV integration in host genes (or fusions) can lead to overexpression of the target genes. To remove their influence, we repeated the above analysis after removing the genes within the 10 kb regions flanking the integration sites (Fig. S3C,D). Nevertheless, we found that gene expression was higher in the TADs with integration as compared to samples without integration, albeit lower than above (Fig. 3A; Fig. S3A), suggesting the long-range cis-regulatory potential of HPV integration.
Taken together, these results indicate the potential of HPV integration to enhance the expression of nearby host genes, but mostly within the same TAD, likely due to the constraint imposed by the TAD boundaries or genome organisation. Further, within the TAD, the observed overexpression could come from both the direct and long-range effects of HPV integration on target genes through chromatin contacts in 3D nuclear space.
Recurrent HPV integrations in TADs are associated with oncogene overexpression
Recurrent HPV integrations near cancer-related genes (such as MYC and TP63) have been previously reported in cervical cancers [4,5]. In those studies, recurrence was defined mostly based on the HPV integration directly overlapping with, or being in close proximity (at a defined distance cut-off) to, the target genes. This can miss integrations that are far away yet have a long-range effect on the target genes. To overcome this, we performed recurrent integration analysis at the TAD level. For this analysis, we considered HPV integrations (n = 1324 from 326 samples) collated from both DNA-based studies (WGS, WXS and hybrid capture) and RNA-seq (see Section 2). TADs with HPV integration(s) in at least three samples were considered recurrent (Fig. 4A). This resulted in eight recurrent TADs harbouring integrations from 62 samples. TAD3189 (with 22 samples) and TAD2252 (with 15 samples) were the top two most recurrently integrated TADs. Interestingly, these TADs also exhibited mutually exclusive patterns of HPV integration, indicating that these recurrent TADs were not affected in the same patients. Moreover, we found that certain HPV subtypes were frequently integrated in these TADs (Fig. 4A). HPV16 was predominantly observed in TAD3189 (13/22), TAD2252 (11/15), TAD1333 (5/5) and TAD2320 (3/4), whereas HPV18 was predominant in TAD1369 (3/6).
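The recurrence criterion above, a TAD counting as recurrent when it carries integrations in at least three distinct samples, can be sketched as follows (the call list is illustrative, loosely mirroring the sample counts mentioned in the text):

```python
from collections import defaultdict

# Hypothetical integration calls: (tad_id, sample_id).
calls = ([("TAD3189", f"s{i}") for i in range(22)]
         + [("TAD2252", f"s{i}") for i in range(15)]
         + [("TAD996", "s1"), ("TAD996", "s2")])

samples_per_tad = defaultdict(set)
for tad, sample in calls:
    samples_per_tad[tad].add(sample)

# Recurrent: HPV integration(s) in at least three distinct samples.
recurrent = {tad for tad, samples in samples_per_tad.items()
             if len(samples) >= 3}
```

Using a set of sample ids per TAD ensures that multiple integrations from the same sample count only once towards recurrence.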
In TAD3189, we observed both HPV16 and HPV18; however, the proportion of HPV18 was much higher (41%) in TAD3189 as compared to the overall HPV18-positive samples (16%) in the TCGA-CESC cohort (Chi-squared test, P = 0.011). This is likely due to the preferential integration sites of different HPV strains [41]. Analysis of an independent dataset of small cell cervical carcinoma [21] also revealed TAD3189 to have the most recurrent HPV integration (with HPV18) among other TADs (Fig. S4A).
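A 2x2 chi-squared test of the kind used above, comparing the HPV18 proportion inside a TAD against the rest of the cohort, can be sketched as follows. The table counts below are invented for illustration, not the study's exact tallies, and the sketch omits the continuity correction.

```python
import math

def chi2_2x2(a: int, b: int, c: int, d: int):
    """Pearson chi-squared statistic and p-value (1 df, no continuity
    correction) for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

# Illustrative table: HPV18 vs other subtypes, inside vs outside a TAD.
chi2, p = chi2_2x2(a=9, b=13, c=43, d=261)
```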
To further understand whether the recurrent HPV integrations are associated with tumorigenesis, we looked at the presence of cancer-related genes (from the Cancer Gene Census [42] and Cancer LncRNA Census [43]) in these TADs. Interestingly, TAD3189 harbours multiple important coding (MYC) and non-coding (PVT1 and CCAT1) oncogenes. Overexpression of these oncogenes has been previously reported in cervical cancers and was also associated with poor prognosis [44-46]. Thus, to test if HPV integration in TAD3189 affects the expression of these oncogenes, we divided the TCGA-CESC samples into two groups, one with integration in TAD3189 and one without. Expression of both MYC (Wilcoxon rank-sum test, P = 0.0006) (Fig. 4B) and PVT1 (P = 0.010) (Fig. 4C) was significantly higher in the samples with integration in TAD3189 as compared to samples without. Similarly, in TAD2252, TP63 (oncogene) expression was higher (P = 0.057) (Fig. 4D) in samples with integration in TAD2252. In TAD1333, ERBB2 (oncogene) expression was significantly higher (P = 0.0007) in samples with integration, whereas CDK12 (tumour suppressor gene) in the same TAD did not show any change in expression (P = 0.27) (Fig. 4E,F). Also, RAD51B (tumour suppressor gene) in TAD996 did not show any change in expression (P = 0.52) (Fig. 4G). This suggests that HPV integration preferentially enhances the expression of oncogenes in these recurrent TADs. Interestingly, in these TADs (TAD3189 and TAD2252) with oncogenes, increased gene expression (z-score > 0; above the average expression level across samples) was evident regardless of whether the HPV integration occurred directly at the gene or further away in the respective TADs (Fig. S5), suggesting that HPV integration can affect gene expression both locally and at longer distances. Further, we checked the effect of these recurrent TADs with integration on host gene expression at the pathway level (using MSigDB hallmark gene sets), which showed that samples with integration in TAD3189 (with MYC) have upregulation of MYC target genes, whereas those with integration in TAD2252 (with TP63) have upregulation of interferon (alpha and gamma) signalling, as compared to samples without integration (Fig. S4B,C). This indicates that the upregulation of oncogenes observed within these TADs indeed causes downstream transcriptional changes at the pathway level and also elicits different pathological responses.

Fig. 4. Recurrently integrated TADs and expression alteration of associated cancer genes. (A) Heatmap shows the HeLa TADs with recurrent HPV integrations. The x-axis represents the sample-id and the y-axis represents the TADs (denoted with distinct numbers to differentiate each TAD domain). A blue box indicates HPV integration(s) in a particular TAD in a particular sample. The top two rows represent the tumour histology and HPV subtype in each of the samples, respectively. (B, C) Gene expression of MYC (B) and PVT1 (C) in TCGA-CESC samples with (n = 11) and without (n = 151) HPV integration in TAD3189. (D) Gene expression of TP63 in TCGA-CESC samples with (n = 7) and without (n = 155) HPV integration in TAD2252. (E, F) Gene expression of ERBB2 (E) and CDK12 (F) in TCGA-CESC samples with (n = 4) and without (n = 158) HPV integration in TAD1333. (G) Gene expression of RAD51B in TCGA-CESC samples with (n = 6) and without (n = 156) HPV integration in TAD996. The P-values shown in panels (B-G) were calculated using the Wilcoxon rank-sum test (two-sided). The colour of each dot in the box plots (B-G) represents the relative copy number status of the gene in the respective TCGA-CESC samples (−2 deep deletion, −1 deletion, 0 copy neutral, 1 amplification, 2 high amplification). In each boxplot, the horizontal middle line indicates the median, the height of the shaded box indicates the interquartile range (IQR) and the whiskers indicate 1.5 × IQR.
HPV18-induced chromatin interactions in HeLa are confined locally and target oncogenes
Next, we asked whether the long-distance effect of HPV integration on oncogene expression in the above recurrent TADs could be mediated by chromatin looping. To study this, we chose the HeLa cell line (as a model), which has HPV18 integration in TAD3189 (chromosome 8), the most frequently HPV-integrated TAD in cervical cancers (Fig. 4A). To obtain an unbiased and global overview of all the genomic interactions between the integrated HPV DNA and the host genome, we leveraged the available Hi-C data from HeLa [47], which captures overall genomic interactions at a particular resolution (i.e. regions that are in close proximity in 3D space). For the Hi-C data analysis, we constructed a hybrid human-HPV18 genome and mapped reads to it to compute the contact frequency (see Section 2). First, we looked at the chromosome level to identify which chromosomes have contacts with the integrated HPV18 DNA. This showed that the majority of the contacts involving HPV18 were associated with chromosome 8 (the chromosome with the HPV18 integration), followed by intra-HPV18 contacts (Fig. S6A). Second, to gain further insights into these interactions within chromosome 8, we plotted the normalised contact frequency (see Section 2) between the HPV18 DNA and the host genome for all the TADs on chromosome 8. This showed that the highest interaction frequency with the integrated HPV18 DNA was observed with genomic regions in TAD3189, where the HPV integration localised (Fig. 5A; Fig. S6B,C). Further, to check if this is expected simply due to the genomic proximity of the HPV integration, we compared the normalised interaction frequency between HPV18 and TAD3189 with distance-matched, randomly sampled interactions from genomic regions within chromosome 8. This showed that the HPV18-TAD3189 interactions were significantly higher (Mann-Whitney U test, P = 1.37e-08) as compared to random interactions of similar distance (Fig. 5B). Further, to check if the TAD boundary constrains the interactions of HPV18 to be mostly within TAD3189, we compared the normalised interaction frequency between HPV18 and TAD3189 with the interactions between HPV18 and regions outside of TAD3189 but within chromosome 8 at similar distances. Again, this showed that the HPV18-TAD3189 interactions were significantly higher (Mann-Whitney U test, P = 9.54e-22), suggesting that the TAD boundaries limit the interactions of HPV18 DNA to within TAD3189 (Fig. 5B). Together, these results demonstrate that the chromatin interactions involving the integrated HPV18 are mostly confined to the same TAD containing the HPV18 integration, thus possibly supporting the TAD-level upregulation of genes observed in tumours (see Fig. 3A; Fig. S3A).

Fig. 5. Hi-C analysis reveals highly localised chromatin interactions between integrated HPV18 DNA and the host genome in HeLa. (A) Scatterplot shows the interaction frequency per kb between TADs on chromosome 8 and the HPV18 genome. All the TADs are arranged in a linear manner (5′ to 3′ direction of the genome). TAD3189 contains the HPV integration (red dot), which showed the highest interaction as compared to the adjacent TADs (and also at the chromosome level, Fig. S6B). (B) Boxplot shows the interaction frequency between HPV18 and TAD3189 regions (n = 74), distance-matched interactions between HPV18 and regions outside of TAD3189 (n = 74), and between genomic regions within chromosome 8 (n = 1000, randomly sampled). The P-values were computed using the Mann-Whitney U test (two-sided). In the boxplot, the horizontal middle line indicates the median, the height of the shaded box indicates the interquartile range (IQR) and the whiskers indicate 1.5 × IQR. (C) Histogram distribution plot shows the interaction frequency of the bins overlapping the promoters of MYC and PVT1 with all the interacting bins on chromosome 8 and the integrated HPV18 (orange line). Only interacting bins supported by more than 5 reads are shown in the plot. The empirical P-value indicates the likelihood of finding an interaction frequency greater than that observed between HPV18 and the gene promoters in distance-matched random sampling (n = 1000) of chromatin interactions between genomic regions within chromosome 8.
We wanted to further understand the specific host regions within TAD3189 that looped with the integrated HPV18 at high frequency. TAD3189 contains two oncogenes, MYC and PVT1, which are ~500 kb away from the integration. We focused on the promoter regions of these two oncogenes and observed that their interaction with HPV18 (among other regions on chromosome 8) was in the top 1 percentile for both (Fig. 5C). Also, the normalised interaction frequency was significantly higher between HPV18 and the promoters of MYC (P = 0.001) and PVT1 (P = 0.006) as compared to the distance-matched, randomly sampled interactions within chromosome 8. This indicates a potentially strong chromatin interaction between these oncogenes' promoters and the integrated HPV18 DNA in HeLa.
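The distance-matched empirical test used above, sampling background interactions at a similar genomic distance and asking how often they reach the observed HPV18-promoter frequency, can be sketched as follows. The distance-decay model and all numbers are made up for illustration; the real analysis would draw from actual Hi-C contact frequencies.

```python
import random

def empirical_p(observed, background, target_dist, tol, n_samples, rng):
    """Empirical p-value: fraction of distance-matched background
    interaction frequencies that reach or exceed the observed one."""
    pool = [f for d, f in background if abs(d - target_dist) <= tol]
    draws = [rng.choice(pool) for _ in range(n_samples)]
    return sum(f >= observed for f in draws) / n_samples

rng = random.Random(0)
# Toy background: interaction frequency decays with genomic distance.
background = [(d, 1000.0 / (d + 1)) for d in range(10_000, 2_000_000, 10_000)]
p = empirical_p(observed=0.01, background=background,
                target_dist=500_000, tol=50_000, n_samples=1000, rng=rng)
```

Matching on distance is the key design choice here: raw contact frequencies fall off steeply with genomic distance, so an unmatched background would make any nearby contact look significant.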
4C-seq reveals haplotype-specific chromatin looping between integrated HPV18 and host genome
To further characterise the chromatin looping interactions mediated by the HPV18 integration at the local scale and at higher resolution, we performed a 4C-seq experiment with the integrated HPV18 DNA as the anchor point in HeLa. At first, we wanted to check the extent of HPV integration-induced chromatin interactions at the TAD level. For this, we plotted the per-bp coverage of 4C-seq reads in TAD3189 (with HPV integration) and in the neighbouring TADs (4 upstream and 4 downstream). Most of the 4C-seq signal was observed from the TAD with HPV integration, similar to the Hi-C analysis (Fig. S7A,B). Even though the HPV integration was found near the left boundary of TAD3189, the coverage of reads in the immediate upstream TAD was quite low. This suggests a role for chromatin structure (TAD boundaries) in confining the regulatory effects of the integrated HPV DNA predominantly within the same TAD. Further, within TAD3189, a higher level of interaction was observed between the integrated HPV18 and all three oncogenes (CCAT1, MYC and PVT1) in that TAD (Fig. 6A, 4C track). This further supports previous studies which showed an interaction between HPV18 and the MYC locus using ChIA-PET [16] and 3C analysis [15].
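Aggregating read coverage per TAD, as done above to compare the integrated TAD with its neighbours, reduces to summing read-interval overlaps and normalising by TAD length. A minimal sketch with hypothetical half-open intervals:

```python
# Hypothetical TADs and 4C-seq reads as half-open (start, end) intervals.
tads = {"TAD_up": (0, 100), "TAD_hpv": (100, 300), "TAD_down": (300, 400)}
reads = [(120, 170), (150, 200), (250, 300), (90, 110), (350, 360)]

def overlap(a, b):
    """Length of the overlap between two half-open intervals."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

# Per-bp coverage of each TAD: overlapping bases divided by TAD length.
coverage = {name: sum(overlap(span, r) for r in reads) / (span[1] - span[0])
            for name, span in tads.items()}
```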
In HeLa, integration of HPV18 is observed in only one of the two haplotypes of chromosome 8. Based on this, we expected that the chromatin interactions observed between the integrated HPV18 DNA and the host genome could be haplotype-specific. To check this, we performed haplotype-specific 4C-seq coverage analysis, using heterozygous SNPs (as marker positions) from the HeLa genome (see Section 2). This revealed that almost all reads from the 4C-seq (~99-100%) mapped to the allele from Haplotype A (which has the HPV integration) (Fig. S7C,D). This suggests that the HPV integration-mediated chromatin interactions are not only localised mostly within the same TAD but are also haplotype-specific.
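Assigning reads to haplotypes via phased heterozygous SNPs, the basis of the coverage analysis above, can be sketched as follows (SNP positions, phased alleles and reads are all hypothetical):

```python
# Phased heterozygous SNPs: position -> (haplotype A base, haplotype B base).
phased_snps = {100: ("G", "T"), 250: ("C", "A")}

def assign_haplotype(position, base):
    """Assign a read to a haplotype from the base it carries at a phased
    heterozygous SNP; returns None for bases matching neither allele."""
    hap_a, hap_b = phased_snps[position]
    if base == hap_a:
        return "A"
    if base == hap_b:
        return "B"
    return None

# Hypothetical 4C-seq reads as (SNP position, observed base) pairs.
reads = [(100, "G"), (100, "G"), (250, "C"), (250, "A"), (100, "G")]
calls = [assign_haplotype(p, b) for p, b in reads]
frac_a = calls.count("A") / sum(c is not None for c in calls)
```

Reads whose base matches neither phased allele (sequencing errors) are excluded from the denominator rather than forced onto a haplotype.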
Haplotype-specific cis-regulatory activity of HPV integration is associated with allele-specific oncogene overexpression

Further, we asked if the above haplotype-specific chromatin looping interactions could lead to haplotype-specific regulatory changes and gene expression alterations. To check this, we performed haplotype-specific analysis of RNA polymerase II (Pol2) binding on chromosome 8 in HeLa. This identified three Pol2 peaks in and around TAD3189 that showed significant haplotype-specific binding to Haplotype A (> 98% of reads from Haplotype A in all 3), as compared to the expected proportion based on DNA copy number. Interestingly, these peaks overlapped the CCAT1 and MYC genes and also the super-enhancer overlapping the HPV integration (Fig. 6A, Pol2 peaks (HapA) track), suggesting that the integrated HPV DNA drives gene regulation in a haplotype-specific manner. Further, we asked if this results in preferential expression of genes from the haplotype with the HPV integration. To test this, we performed allele-specific gene expression analysis (taking into account the DNA copy number) on chromosome 8 in HeLa. This revealed MYC to have significantly higher expression from Haplotype A (Fig. 6B) (see Section 2). The other oncogenic lncRNAs in TAD3189, CCAT1 and PVT1, also showed a similar pattern (Fig. 6B). Further, we performed allele-specific gene expression analysis at the TAD level, combining all the genes in a TAD together. This also revealed TAD3189 to have significantly higher expression from Haplotype A among all the TADs on chromosome 8 (Fig. 6C). These results are in line with the TAD-specific chromatin interactions (from Hi-C and 4C-seq) reported above (Fig. 5B; Fig. S7A). Furthermore, to test if the chromatin interactions are essential for the overexpression of MYC/PVT1, we utilised the Promoter Capture Hi-C data generated from HeLa with depletion of SCC1 (RAD21, a subunit of Cohesin) and CTCF, separately, using an auxin-inducible degron system [35]. Given that both Cohesin and CTCF are important for chromatin looping, we asked whether the chromatin interactions between HPV18 and the promoters (TSS ±10 kb) of MYC/PVT1 change upon Cohesin or CTCF depletion. This showed that SCC1 (Cohesin) depletion resulted in a significant decrease in chromatin interactions between HPV18 DNA and the promoter regions of MYC (Fisher's exact test, P = 0.004, OR = 3.8) and PVT1 (P = 0.004, OR = 6.4) (Fig. S8A), whereas with CTCF depletion no significant changes were observed (P = 0.9, OR = 0.93 and P = 0.9, OR = 1.1, respectively) (Fig. S8B). Furthermore, the transcriptional response (captured using SLAM-seq) in the SCC1 depletion condition (versus control) revealed a significant down-regulation of MYC expression (log2 fold change = −1.78 and FDR-adjusted P = 3.07 × 10^-21) (Fig. S8C), suggesting that the chromatin interactions mediated by Cohesin directly influence MYC overexpression. However, PVT1 did not show any change in expression, likely due to the low expression of the lncRNA (as compared to mRNAs) and the limited sensitivity of SLAM-seq in capturing it over a short time interval.
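A Fisher's exact comparison of promoter-contacting read-pair counts between conditions, of the kind reported above, can be sketched as a one-sided test via the hypergeometric distribution. The 2x2 counts below are invented for illustration, not the study's actual read counts.

```python
import math

def fisher_exact_greater(a, b, c, d):
    """Odds ratio and one-sided (greater) Fisher's exact p-value for a
    2x2 table [[a, b], [c, d]], via the hypergeometric distribution."""
    odds_ratio = (a * d) / (b * c)
    row1, col1, n = a + b, a + c, a + b + c + d
    p = 0.0
    # Sum hypergeometric probabilities for tables at least as extreme.
    for k in range(a, min(row1, col1) + 1):
        p += (math.comb(col1, k) * math.comb(n - col1, row1 - k)
              / math.comb(n, row1))
    return odds_ratio, p

# Illustrative: promoter-contacting read pairs, control vs depleted.
odds_ratio, p = fisher_exact_greater(a=30, b=10, c=20, d=25)
```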
A recent study reported that HPV integrations are enriched in super-enhancers (SEs), which ultimately control lineage-determining genes [4]. Hence, we asked whether the HPV18 integration in HeLa affects SE activity on chromosome 8. TAD3189 also harbours multiple SEs, one of which overlaps the HPV integration. Interestingly, this SE is the strongest in terms of signal (enhancer marks) on chromosome 8, and the third strongest across the genome in HeLa (SEdb [24]). Allele-specific expression analysis using GRO-seq data revealed this SE in TAD3189 to be the most significantly and highly expressed from Haplotype A among all the SEs on chromosome 8 (Fig. 6D). Taken together, these results indicate that the HPV integration in chromosome 8 leads to changes in regulatory activity and chromatin interactions, resulting in allele-specific expression of the SE and oncogenes within the same TAD (TAD3189).

Fig. 6. HPV18 integration-mediated chromatin interactions from 4C-seq and allele-specific expression of oncogenes and SE in HeLa. (A) Regions in TAD3189 that show high interactions with the integrated HPV18 DNA by 4C-seq are shown in the bottom line plot (cis analysis). The HPV integration region, MYC and PVT1 are marked by arrows. Various histone modification, Pol2 and CTCF binding tracks from HeLa are also overlaid. The Pol2 peaks (HapA) track shows the Pol2 peaks with significant haplotype-specific binding to Haplotype A. (B-D) Allele-specific expression analysis for (B) individual genes, (C) TADs, and (D) super-enhancers on chromosome 8. Values on the x-axis represent the log2 ratio calculated as the fraction of RNA reads coming from Haplotype A out of the total RNA reads divided by the fraction of DNA reads coming from Haplotype A out of the total DNA reads (summed over all the heterozygous SNPs in a particular feature). The y-axis represents the negative log10 of Bonferroni-corrected P-values, after a binomial test of RNA read counts from Haplotype A against the proportion of DNA read counts from Haplotype A, for each of the features. (E) Model summarising HPV integration-mediated changes in host chromatin structure and gene expression dysregulation. HPV integration can lead to local chromatin changes resulting in transcriptional upregulation of host genes (including fusions with viral genes) in its vicinity. In addition, the integrated HPV DNA can mediate long-range chromatin interactions, resulting in the upregulation of genes at a distance.
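The allele-specific expression statistic described in the legend, a log2 ratio of the haplotype-A RNA fraction over the haplotype-A DNA fraction (the copy-number correction), together with a binomial test and Bonferroni correction, can be sketched as follows. Read counts are hypothetical, and for simplicity this sketch uses a one-sided test.

```python
import math

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def allele_specific(rna_a, rna_total, dna_a, dna_total, n_tests):
    """Log2 ratio of the haplotype-A RNA fraction over the haplotype-A DNA
    fraction, with a one-sided binomial test against the DNA proportion
    and Bonferroni correction over n_tests features."""
    log2_ratio = math.log2((rna_a / rna_total) / (dna_a / dna_total))
    p = binom_sf(rna_a, rna_total, dna_a / dna_total)
    return log2_ratio, min(1.0, p * n_tests)

# Hypothetical counts: 90% of RNA reads but only 50% of DNA reads from
# Haplotype A, tested alongside 49 other features.
log2_ratio, p_adj = allele_specific(rna_a=180, rna_total=200,
                                    dna_a=100, dna_total=200, n_tests=50)
```

Dividing by the DNA fraction is what distinguishes true allele-specific expression from a mere copy-number imbalance between haplotypes.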
Discussion
Human papillomavirus DNA integration into the host nuclear DNA is often found in tumours of advanced cervical cancers, as compared to earlier stages where the HPV is mostly present in episomal form [3,39,48]. Thus, HPV DNA integration is considered an important oncogenic event in the transformation and progression of cervical cancer; however, the molecular mechanism underlying this is not yet fully understood. On one hand, it can be attributed to the overexpression of viral oncogenes (E6 and E7) from the integrated HPV DNA, which could affect cellular functions (such as cell proliferation and immune response) [7,49]. On the other hand, the integrated HPV DNA itself can affect the expression of nearby host genes in cis and thereby contribute to tumour development [13,50]. However, the extent of the latter at the genome-wide level, particularly in the context of host chromatin structure, is not well understood. Thus, in this study, we investigated the impact of HPV integration on host chromatin structure and gene regulation genome-wide, by combining HPV integrations from cervical tumour samples with chromatin structure information from the closest-matching cancer and normal cell lines.
First, we showed that the distribution of HPV integrations across the genome is non-random. They are significantly enriched in active chromatin regions and depleted in inactive chromatin regions. Though these results are consistent with previous meta-analysis studies [17,18,51], our analysis tried to remove any bias arising from the integration detection methods (such as RNA-seq and APOT) by considering only whole-genome DNA-based assays. Additionally, for the enrichment analysis, random integration sites were generated by matching the GC content around the observed HPV integration sites to account for the influence of local sequence content (since HPV integrations are associated with microhomology-mediated processes [5]). This approach is more robust than that of previous studies, which used uniform random distributions. Still, the significant enrichment of HPV integrations observed at active chromatin regions in cervical tumours could be explained by multiple factors acting prior to and/or during clonal selection. Prior events may include the closer proximity of the episomal HPV DNA to active chromatin regions due to the interaction between the viral E2 protein and host BRD4 chromatin proteins (likely for the utilisation of host transcription/replication factors for viral transcription and replication), DNA breaks at transcriptionally active regions due to replication-transcription conflicts or torsional stress, DNA repair, and microhomology between HPV and host DNA [5,39,52]. All these factors can favour the accidental integration of HPV DNA into the host genome. After that, if the host locus of the HPV integration is favourable for the expression of viral oncogenes (E6/E7) and for the upregulation of nearby host genes (especially the oncogenes discussed below) that provide a selective growth advantage and favour clonal expansion, then these integrations are likely to undergo positive selection.
Second, we observed that the transcriptional upregulation in the flanking region (up to 10 kb) of HPV integrations was evident in both active and inactive chromatin regions. This could be due to local genomic alterations (amplifications/translocations) associated with the HPV integration, as well as changes in the local chromatin environment. For example, host transcription factors can bind to the integrated HPV DNA and thereby increase local chromatin accessibility [53-55] and subsequently upregulate the expression of nearby host genes. Alternatively, this can be due to the fusion of nearby host genes with viral genes [11]. It is also possible that HPV integration at host regulatory regions (such as enhancers) could lead to the formation of super-enhancers [56,57] and thereby enhance the expression of host genes. Our findings expand this observation in cervical cancers, as we observed higher expression of SE eRNAs in the immediate vicinity of the HPV integration regions, regardless of the host chromatin state.
Third, we explored the long-range effect of HPV integration on the expression of host genes. For this, instead of defining arbitrary length cut-offs around the integration sites [4,11], we used TAD boundaries as demarcation points. Overall, we found that the TADs with HPV integration showed higher gene expression as compared to samples without integration in the same TADs, and this upregulation was positively correlated with the number of HPV integrations within the TAD. However, genes in the neighbouring TADs were mostly unaffected. This may be because the host chromatin structure influences the HPV-associated genomic alterations (amplification/translocation) during the integration process, or limits the HPV integration-mediated chromatin interaction changes to intra-TAD contacts because of the insulation property of TAD boundaries. Recent studies have shown that HPV integration can cause changes in local TAD structures in advanced cervical cancers [58] and also in human cell lines: HPV16 integration in W12 (prior to clonal selection) [14] and HPV16 integration in the SiHa cell line [59]. Our results further extend this observation genome-wide in cervical tumours and show that the majority of the HPV integration-induced chromatin changes and associated gene expression changes are mostly confined to the same TAD.
We found a few loci with recurrent HPV integration at the TAD level that were associated with overexpression of oncogenes within them. Further, the distribution of HPV integrations within these TADs revealed that the expression of oncogenes (such as MYC and PVT1) was affected not only by HPV integration directly at or close to these genes, but also by integrations farther away (~500 kb) within the same TAD. To further understand the mode of regulation of these oncogenes by HPV integration, we chose HeLa as the model system, as it has the integration in TAD3189, the most recurrently integrated TAD in cervical tumours. Previous studies have shown chromatin interaction between the integrated HPV DNA and the MYC gene (~500 kb away from the integration) using ChIA-PET and 3C assays in HeLa [15,16]. However, whether this interaction is localised and specific to the MYC region, or whether the integrated HPV DNA can interact with other genomic regions, was not known. Our unbiased analysis, using available Hi-C data from HeLa, revealed that the majority of the chromatin interactions were localised within the same chromosome, specifically within the same TAD as the integration (Fig. 5A; Fig. S6A). Further, 4C-seq analysis, taking the integrated HPV DNA as the viewpoint, showed haplotype-specific chromatin interactions between the integrated HPV DNA and host genomic regions in HeLa. This is further supported by the allele-specific RNA Pol II binding enrichment, SE activity, and overexpression of the MYC and PVT1 genes within TAD3189. This allele-specific activity can be extrapolated to the cervical tumour samples as well, because TAD3189 is the most recurrently integrated TAD among cervical tumours (Fig. 4A), and we also observed the overexpression of the oncogenes MYC and PVT1 within them (Fig. 4B,C). We propose that this may be one of the many ways by which HPV integration influences the process of tumorigenesis, as the role of both of these oncogenes, along with CCAT1, in cervical carcinogenesis has already been established [44-46]. In line with this, a recent study has shown that MYC overexpression in other tumour types could be driven by somatic structural variant-mediated changes in long-range chromatin interactions [60]. Similar to cervical cancers, HPV integrations in head and neck squamous cell carcinoma also showed oncogenic transcriptional upregulation near the integration sites and epigenetic changes in the host genome regions interacting with the integrated HPV DNA [57].
However, the limitations of this study include: (a) the HPV integrations we analysed were mostly detected in advanced-stage cervical tumours, so their genome-wide distribution with respect to chromatin states and the dysregulation of cancer gene expression observed here could be influenced by factors acting prior to and during clonal selection. Future studies that test for de novo HPV integration (for example, using cell lines infected with HPV, or genome-wide profiling of early-stage cervical tumours) might shed light on the interplay between viral and host factors that drives the integration process and associated chromatin changes; (b) the overexpression of host regions observed near the HPV integrations in active and inactive regions could have contributions from extrachromosomal DNA (ecDNA) carrying a hybrid viral-host genome [11,37]. Perhaps in future, the application of long-read DNA/RNA sequencing could help to disentangle the DNA structural conformations of HPV integrations and better quantify the contributions from ecDNA; (c) the TADs from the cell lines HeLa and NHEK cover only 47.5% and 56% of the genome, respectively, so we were limited to analysing those HPV integrations that fall within these TAD regions. Chromatin interaction maps from matched tumour samples could help to better understand the long-range effects of HPV integrations genome-wide and also to study changes in TAD structure (whether the integration leads to the splitting of an existing TAD or the merging of adjacent TADs).
Conclusions
To conclude, this study reveals the cis-regulatory potential of HPV integration on host gene dysregulation through changes in the host chromatin structure and gene-regulatory interactions (Fig. 6E). On the basis of our results and previous findings, we propose that HPV integration is a strong driver that mediates multiple modes of dysregulation (including overexpression of the E6/E7 viral oncogenes, the cis-regulatory effect of HPV integrations, local copy number changes, and amplification of regulatory elements) that can affect host cellular functioning, thereby providing selective growth advantages to the cancer cells. This study also demonstrates that TADs can be used to identify distant host genes that are likely to be affected by HPV integrations through looping or chromatin contacts, instead of using arbitrary distance cut-offs. This will help to identify more recurrent integrations at the TAD level and also to associate orphan HPV integrations with new target genes. Moreover, our findings reveal the significance of insulated neighbourhoods in the form of TADs and their key role in safeguarding the genome from spurious transcriptional changes driven by viral integrations.

Henrietta Lacks, and the HeLa cell line that was established from her tumour cells without her knowledge or consent in 1951, have made significant contributions to scientific progress and advances in human health. We are grateful to Henrietta Lacks, now deceased, and to her surviving family members for their contributions to biomedical research. This study was reviewed by the NIH HeLa Genome Data Access Working Group. The genomic datasets used for the analysis described in this manuscript were obtained from the database of Genotypes and Phenotypes (dbGaP) through dbGaP accession number phs000640.v1.p1. This work was supported by the DBT/Wellcome Trust India Alliance Fellowship [grant number IA/I/20/1/504928] awarded to RS. We also acknowledge support from the Department of Atomic Energy, Government of India, under Project Identification No. RTI 4006, and intramural funds from NCBS-TIFR.
Fig. 2. Enhanced transcriptional activity near HPV integration in the context of chromatin states from HeLa. (A) Boxplot showing the total expression in the 10 kb flanking region around the HPV integration regions as compared to the mean expression from TCGA-CESC samples without HPV integration in the same region. In the boxplot, the horizontal middle line indicates the median, the height of the shaded box indicates the interquartile range (IQR) and the whiskers indicate 1.5 × IQR. The x-axis represents whether the HPV integration is located in inactive (n = 97) or active (n = 54) chromatin regions with respect to HeLa ChromHMM. The P-values were computed using the Mann-Whitney U test (two-sided). (B) Same as (A) but for the eRNA expression from SEs (if any) within 10 kb on either side of the HPV integration regions located in inactive (n = 35) or active (n = 43) chromatin regions. (C) Expression fold change associated with each of the HPV integration regions. The x-axis represents the log2 fold change, calculated as the total expression in the 10 kb flanking region around HPV integration regions divided by the mean expression from other samples without HPV integration in the same genomic region. The y-axis represents the individual sample-id of the TCGA-CESC samples. The colour of the dots indicates whether the integration overlaps an active or inactive ChromHMM region of HeLa. The black vertical line represents the value of log2(fc + 1) = 1. The histogram at the bottom shows the frequency of integration regions in different fold-change bins. Each dot in (A-C) represents an HPV integration region from a sample.
Fig. 3. HPV integration associated host gene overexpression with respect to HeLa TAD domains. (A) TAD level gene expression in the TCGA-CESC samples with HPV integration compared to the mean expression from the samples without HPV integration in the same TADs (n = 169), also for the neighbouring upstream (5′, n = 147) and downstream (3′, n = 156) TADs. (B) TAD level gene expression in the TCGA-CESC samples with HPV integration, separated by whether the TAD had one (n = 69), two (n = 44) or more than two (n = 56) integrations, compared to the mean expression from the samples without HPV integration in the same TADs. The TAD information was obtained from the HeLa cell line for (A, B). Each dot in (A) and (B) represents a unique HeLa TAD-tumour sample combination. In each boxplot, the horizontal middle line indicates the median, the height of the shaded box indicates the interquartile range (IQR) and the whiskers indicate 1.5 × IQR. The P-values shown at the top were computed using the Wilcoxon signed-rank test (two-sided).
[Figure panel labels: genes; local chromatin changes; log2 ratio of allele-specific RNA expression from Haplotype A (with DNA copy-number correction); reads from Haplotype A (w/o DNA copy-number correction)]
Molecular Oncology 18 (2024) 1189-1208. © 2023 The Authors. Molecular Oncology published by John Wiley & Sons Ltd on behalf of Federation of European Biochemical Societies. | 2022-12-01T14:12:10.226Z | 2022-11-28T00:00:00.000 | {
"year": 2023,
"sha1": "4d3467835bbaaeef41074833d4f8a4acfbb25db1",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/1878-0261.13559",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8fa7f9f6ee8c69ee9f2857dbe05bcf6d1ca9429b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
207158256 | pes2o/s2orc | v3-fos-license | Superconductivity with topological surface state in SrxBi2Se3
By intercalation of the alkaline-earth metal Sr in Bi2Se3, superconductivity with a large shielding volume fraction (~91.5% at 0.5 K) has been achieved in Sr0.065Bi2Se3. The analysis of the Shubnikov-de Haas oscillations confirms the 1/2-shift expected from a Dirac spectrum, giving transport evidence of the existence of surface states. Importantly, the SrxBi2Se3 superconductor is stable under air, making the SrxBi2Se3 compound an ideal material base for investigating topological superconductivity.
challenge. Here we report that by intercalation of the alkaline-earth metal Sr in Bi2Se3, superconductivity has been achieved in Sr0.06Bi2Se3. The large shielding fraction (∼88% at 0.5 K) indicates the occurrence of bulk superconductivity. This provides a new material base for investigating topological superconductivity.
The theoretical prediction and the successful experimental realization of topological insulators have opened an exciting research topic in condensed matter physics [1][2][3][4]. Topological insulators are materials with a bulk electronic band gap and gapless, delocalized, metallic surface states. These surface states are formed by topological effects that render the electrons traveling on such surfaces insensitive to scattering by impurities. Such a unique topological electronic structure may provide new routes to generate novel phases and particles, possibly achieving some of the most desirable traits for computing components and next-generation spintronics technologies. Recently, it has been predicted that at the interface of a topological insulator and a superconductor 5, the long-sought yet elusive Majorana fermion will arise. Other interesting phenomena such as the fractional Josephson effect have also been predicted to exist in hybrid superconductor-topological insulator structures.
The fabrication of bulk topological superconductors is an exciting issue in topological material science. However, the experimental realization of topological superconductors has been greatly limited. So far, possible topological superconductivity has been claimed in Cu-intercalated Bi2Se3 (CuxBi2Se3), highly pressurized Bi2Te3 and Sb2Te3, and Bi2Te3 thin films grown on superconducting substrates, etc. [6][7][8][9]. In particular, the discovery of superconductivity in CuxBi2Se3 is notable, and tremendous experimental and theoretical efforts have been devoted to this material since it may represent the first bulk topological superconducting candidate [10][11][12][13][14][15][16][17]. It is now in great demand to identify whether or not the emergence of the superconducting order is associated with the occurrence of a nontrivial topological invariant in such compounds. Some experimental data have given positive evidence for the topological behavior of CuxBi2Se3. For example, point-contact spectroscopy measurements have clearly shown zero-bias conductance peaks from the Majorana bound states at the surface edges 13. And measurements of the bulk and surface electron dynamics in CuxBi2Se3 suggest that the electron dynamics in superconducting Bi2Se3 are suitable for trapping non-Abelian Majorana fermions 11. On the other hand, there are experimental data on the CuxBi2Se3 compounds which give conflicting evidence. In particular, scanning tunneling spectroscopy data reveal a fully gapped feature in the density of states with no in-gap state, possibly suggesting that the superconducting state in the CuxBi2Se3 samples is topologically trivial 17. The divergence in experimental data on the CuxBi2Se3 compounds is mainly due to the relatively low superconducting volume fraction (∼40%) of the samples. At present, it is a challenging task to fabricate more topological superconducting candidates which exhibit
high superconducting volume fraction. Here we show that intercalation of Sr in the well-known topological insulator Bi2Se3 could lead to a superconducting state below ∼2.5 K. The bulk superconductivity has been confirmed by the large shielding volume fraction (88% at 0.5 K). The quantum oscillation data, measured with the magnetic field both parallel and perpendicular to the surface of the sample, reveal an ellipsoidal Fermi surface shape and a nontrivial Berry's phase, providing possible evidence for the existence of a surface state. Thus the Sr0.06Bi2Se3 compound could be a possible candidate topological superconductor.
Figure 1a gives the x-ray powder diffraction (XRD) pattern of the SrxBi2Se3 sample (the nominal x is 0.15). In order to see the influence of Sr intercalation on the lattice structure, we plot the XRD pattern of the Bi2Se3 sample as the reference. All diffraction peaks display a slight shift to lower angle, suggesting a slight enlargement of both the a- and c-axis lattice constants. Detailed refinements of the XRD patterns suggest that the lattice parameters of SrxBi2Se3 are a = 4.1369 Å and c = 28.598 Å, which are larger than those of a = 4.1328 Å and c = 28.573 Å in undoped Bi2Se3. We also performed an energy dispersive x-ray spectroscopy analysis on the SrxBi2Se3 sample (shown in Fig. 1b). It gives the chemical formula Sr0.06Bi2Se3 (Table S1). Figure 2a shows the temperature dependence of the in-plane resistivity (ρxx∼T curve) of an as-grown Sr0.06Bi2Se3 sample. The resistivity exhibits metallic-like behavior at high temperature.
The onset of the superconducting transition occurs at Tc ∼2.57 K and zero resistivity is achieved at Tzero ∼2.39 K. This extremely narrow superconducting transition width (∆Tc < 0.2 K) suggests the high quality of the sample. Below ∼50 K, the ρxx∼T curve is very flat, giving the residual resistivity ρxx0 = 0.24 mΩ cm. The Hall resistivity ρxy is almost proportional to the applied magnetic field B (Fig. 2b), suggesting the dominance of only one type of bulk carrier. The negative slope of the ρxy∼B curve means that the dominant charge carriers in Sr0.06Bi2Se3 are electrons. The Hall coefficient RH is found to be only weakly temperature dependent and the carrier concentration is estimated to be ne ∼2.2×10^19 cm^-3 (Fig. 2c). Figure 2d gives the temperature dependence of the magnetic susceptibility measured with B ∥ ab. In the B ∥ ab case, we ignore the effect of the demagnetization factor since the dimensions of the sample satisfy a∼b≫c. The diamagnetic signal appears below 2.4 K. The shielding volume fraction at 0.5 K can be as high as 88%, which is much larger than that of about 43% in CuxBi2Se3 7. And the Meissner volume fraction is about 16.5% at 0.5 K 18,19. It should also be mentioned that the background of the SdH oscillations varies significantly with the field direction, indicating that the transverse magnetoresistance is very anisotropic. Figure 3b shows the SdH oscillations after subtracting the background for the B⊥ab direction. The very simple pattern seen in Fig. 3b is a result of the single frequency F = 132.8 T (see inset for the Fourier transform) governing the SdH oscillations. The same analysis is applied to the data for the B ∥ ab direction (Fig. 3c).
The frequency is F = 233.3 T, and k_F^z = 1.12 nm^-1. For a closed Fermi pocket, the bulk carrier concentration (n) is given by n = k_F^x k_F^y k_F^z / (3π²). The estimated charge carrier concentration is 1.6×10^19 cm^-3. The obtained Fermi surface of the Sr0.06Bi2Se3 superconductor is larger than that of the Bi2Se3 parent topological insulator, suggesting that the intercalation of Sr introduces charge carriers into the electronic state.
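The carrier-density expression referred to here was lost in text extraction; for a closed, spin-degenerate ellipsoidal pocket the standard result is n = k_F^x k_F^y k_F^z / (3π²), and plugging in the Fermi momenta quoted in the text reproduces the stated ~1.6×10^19 cm^-3. A quick numerical check (formula assumed, values from the text):

```python
import math

# Fermi momenta reported from the SdH analysis (converted nm^-1 -> m^-1).
kx = ky = 0.64e9
kz = 1.12e9

# Carrier density of a closed ellipsoidal, spin-degenerate Fermi pocket:
# n = kx * ky * kz / (3 * pi^2)  (standard 3D result, assumed here).
n_m3 = kx * ky * kz / (3 * math.pi**2)
n_cm3 = n_m3 * 1e-6  # m^-3 -> cm^-3

print(f"n = {n_cm3:.2e} cm^-3")  # ~1.55e19, consistent with the quoted 1.6e19
```

The agreement with the quoted value (and its being slightly below the Hall estimate of 2.2×10^19 cm^-3) supports this reading of the garbled formula.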
We analyzed the SdH oscillations by plotting the Landau index against the inverse of the magnetic field (1/B). In general, any closed cyclotron orbit is quantized under an external magnetic field B according to the Lifshitz-Onsager quantization rule, A_n ħ/(eB) = 2π(n + 1/2 − φ_B/2π + δ). Here A_n is the extremal cross-sectional area of the Fermi surface (FS) related to the Landau level (LL) n; φ_B is a geometrical phase; δ is a phase shift determined by the dimensionality, taking the value δ = 0 (δ = ±1/8) for the 2-dimensional (3-dimensional) case 22. Therefore, the value of 1/B can be indexed by the Landau index n. Figure 3d plots 1/B against n for the Sr0.06Bi2Se3 single crystal sample for both the B⊥ab and B ∥ ab directions. The linear fitting gives an intercept of -0.136 for both the B⊥ab and B ∥ ab cases. This intercept is far from ±1/2, which is probably due to the combined contribution to the conductivity from both the surface state and the bulk. It is interesting to notice that the intercept is very close to -1/8. The fact that the value is close to -1/8 is consistent with the 3D nature of the system, indicating a non-trivial Berry's phase for the spin-split Fermi surface 22. However, more accurate identification of the surface state and the bulk should be performed using angle-resolved photoemission spectroscopy and scanning tunneling microscopy studies.

°C/h. After that, the quartz tubes were taken out and the samples were quenched in ice water. The resultant crystals are easily cleaved along the basal plane, leaving a silvery, shining, mirror-like surface (see Fig.
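The Landau-fan analysis described here (Landau index n versus 1/B, linear fit, intercept compared with ±1/8) can be sketched numerically. The data below are synthetic, generated from the frequency and intercept quoted in the text; only the fitting procedure is illustrated:

```python
import numpy as np

# Synthetic Landau-fan data built from the values quoted in the text:
# oscillation frequency F = 132.8 T (B perpendicular to ab) and
# intercept gamma = -0.136, so that n = F * (1/B) + gamma.
F, gamma = 132.8, -0.136
n_index = np.arange(5, 16)          # assumed (hypothetical) Landau indices
inv_B = (n_index - gamma) / F       # corresponding 1/B positions (1/T)

# A linear fit of n versus 1/B recovers the frequency (slope) and the
# phase intercept, which is then compared against -1/8 for a 3D system.
slope, intercept = np.polyfit(inv_B, n_index, 1)
print(f"F = {slope:.1f} T, intercept = {intercept:.3f}")
```

On real data the points carry measurement noise, so the fitted intercept (here exactly -0.136 by construction) is what gets compared with the ideal ±1/8 value.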
S1). The typical dimensions of the single crystals are 3×2×0.5 mm³. The actual composition of the crystal was determined using energy dispersive x-ray spectrometry (EDX) analysis, which was performed using an Oxford SWIFT3000 spectrometer equipped with a Si detector. The EDX measurements were done on more than ten pieces of single crystals. For each piece, about twenty different points were randomly selected in the EDX measurements and the average was defined as the actual composition.
The obtained crystals were characterized by powder x-ray diffraction (XRD) and x-ray single crystal diffraction (Fig. S2) with Cu Kα radiation at room temperature. The temperature dependence of the resistivity from 1.9 K to 300 K was measured by a standard four-probe method in a commercial Quantum Design physical property measurement system (PPMS-14 T).
Magnetic measurements were performed using a superconducting quantum interference device magnetometer in both He3 (lowest temperature 0.5 K) and He4 (lowest temperature 1.9 K) cryostats. The magnetization data were collected with the applied magnetic field parallel to the shining surface of the samples. The applied magnetic field was 2 Oe. One important point we should note is that the data collected in the He4 environment gave a relatively higher shielding volume fraction compared to those collected in He3 (for the same sample). For example, at 2 K, the shielding volume fraction measured in He3 is about 23%, while it is about 31% in the He4 case (Fig. S3). The divergence between the He3 and He4 cases was reproducible in more than ten repeated measurements. At present, we cannot give a good explanation for this strange behavior. In the manuscript we use the data collected in the He3 environment. We also give the data collected in He4 below as a supplementary result.
The quantum oscillation experiments were performed on the Cell 5 water-cooling magnet at the High Magnetic Field Laboratory of the Chinese Academy of Sciences. The measurements were done using a field-sweeping method, with the temperature fixed at 350 mK, 1.5 K, 3 K, 10 K, 40 K, and 70 K.
The maximum magnetic field is 33 Tesla.
Figure 1c is a high-resolution transmission electron microscopy (HRTEM) image of the sample, and it clearly shows equally spaced lattice fringes. The calculated fringe separation is 3.035 Å, which corresponds to the d-spacing of the (015) plane of rhombohedral Sr0.06Bi2Se3. The clearly distinguishable lattice fringes in the HRTEM image indicate the high crystallinity of the sample. Figure 1d shows an atomic-resolution transmission electron microscopy image of the sample taken along the [001] lattice direction. The rhombohedral array of Bi(Sr) atoms is clearly seen without any stacking defects. These facts suggest that Sr ions are substantially incorporated into the Bi2Se3 lattice without any lattice mismatch.
The observed angle dependence of the frequency is clear evidence for a closed ellipsoidal Fermi surface pocket, similar to that observed in Bi2Se3 and Cu-intercalated Bi2Se3 samples 14,18-21. According to the Onsager relation, the frequency of the SdH oscillation as a function of inverse magnetic field is F = (ħ/2πe) A(ε_F), with A(ε_F) being the maximal cross-sectional area of the Fermi surface in a plane perpendicular to the magnetic field. Thus we get A_xy(ε_F) ∼1.27 nm^-2 and A_xz(ε_F) ∼2.23 nm^-2. The Fermi momenta are determined to be k_F^x = k_F^y = 0.64 nm^-1.

High-resolution transmission electron microscopy (HRTEM) measurements were performed using a JEOL-2010 transmission electron microscope. The point-to-point resolution is 0.19 nm. Prior to the HRTEM measurement, the Sr0.06Bi2Se3 single crystal samples were ground into fine powder specimens. The specimens were then loaded onto a copper grid which served as the sample holder during the HRTEM measurement. The applied accelerating voltage in the measurement was 200 kV. Both HRTEM images of the specimens and electron-diffraction patterns were taken. Atomic-resolution transmission electron microscopy measurements were performed using a JEOL-ARM-200F microscope, which offers a resolution of 0.08 nm at 200 kV. The operational procedure is similar to the HRTEM measurement.

... of Sciences (Grant No. 2015SRG-HSC025), and the National Natural Science Foundation of China (Grant Nos. 11174290 and U1232142).
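The Onsager relation quoted here can be cross-checked numerically against the reported values; a minimal sketch (physical constants from CODATA, frequency and Fermi momenta from the text):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
e = 1.602176634e-19     # elementary charge, C

def area_from_frequency(F_tesla):
    """Onsager relation: extremal Fermi-surface cross-section A = 2*pi*e*F/hbar."""
    return 2 * math.pi * e * F_tesla / hbar

# B perpendicular to ab: F = 132.8 T should give A_xy ~ 1.27 nm^-2.
A_xy = area_from_frequency(132.8)
print(f"A_xy = {A_xy * 1e-18:.2f} nm^-2")

# Consistency check: for an elliptical cross-section, A = pi * kx * ky
# with kx = ky = 0.64 nm^-1 as quoted in the text.
A_ellipse = math.pi * 0.64e9 * 0.64e9
print(f"pi*kx*ky = {A_ellipse * 1e-18:.2f} nm^-2")
```

Both routes land near 1.27-1.29 nm^-2, confirming the internal consistency of the quoted frequency and Fermi momenta.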
Figure 1: Crystal structure of the Sr0.06Bi2Se3 sample. a, Powder x-ray pattern of the
Figure 2: Superconducting properties of the Sr0.06Bi2Se3 sample. a, Temperature dependence
Figure 3: Quantum oscillations of the Sr0.06Bi2Se3 sample. a, Magnetic field dependence of
The lower Meissner fraction is a typical behavior of a type-II superconductor. The magnetic data of the Sr0.06Bi2Se3 sample suggest that bulk superconductivity with a very large superconducting volume fraction can be induced by element intercalation in the typical topological insulator Bi2Se3. The nearly 100% shielding volume fraction in Sr0.06Bi2Se3 is of great importance: it means that the Sr0.06Bi2Se3 sample is almost homogeneous at large scale and the whole sample is superconducting. Since the superconductivity in Sr0.06Bi2Se3 is induced by intercalation of Sr in the well-known topological insulator Bi2Se3, it is of particular interest to investigate the possible change of the Fermi surface topology in the Sr0.06Bi2Se3 superconductor with respect to that in the Bi2Se3 parent compound. Thus we performed a quantum oscillation measurement on the Sr0.06Bi2Se3 sample. Figure 3a gives the magnetic field dependence of the in-plane resistivity (ρxx∼B curves) of the Sr0.06Bi2Se3 sample measured at different temperatures and different field directions (B⊥ab and B ∥ ab). At 0.35 K, pronounced Shubnikov-de Haas (SdH) oscillations can be observed when the magnetic field is larger than 7 T. With increasing temperature, the amplitude of the SdH oscillations decreases. When T > 70 K, the SdH oscillations are very weak. A noticeable feature is that the positions of the oscillations exhibit no shift with increasing temperature, suggesting that the extremal cross section of the Fermi surface is independent of temperature. Another important feature is that the SdH oscillations are observed for both field directions, suggesting the three-dimensional (3D) behavior of the Sr0.06Bi2Se3 system. The 3D origin of the Sr0.06Bi2Se3 compound is quite similar to the 3D properties of the Bi2Se3 and Bi2Te3 topological insulators 6,7, in contrast to the case of CuxBi2Se3 where only a
very small fractional part of the sample could be superconducting down to the lowest achievable temperature 6,7. Thus the SrxBi2Se3 compound could serve as a model system for studying topological superconductivity. From the M∼H curve shown in the inset of Fig. 2d, it can be seen that the lower critical field of the sample is very small (∼2 Oe).
The estimated value of 1.6×10^19 cm^-3 is slightly less than that determined from the Hall coefficient. The turning of the topological insulating state into a superconducting state by intercalation of Sr in Bi2Se3 is reminiscent of the enhancement of superconductivity by intercalation of Tl/(K,Rb,Cs) in the (Tl,K,Rb,Cs)yFe2−xSe2 systems. The nearly 100% shielding volume fraction in Sr0.06Bi2Se3 also has advantages over the superconductivity induced by intercalation of Cu or by applying external pressure, making Sr0.06Bi2Se3 a model system for studying topological superconductivity. | 2015-08-17T06:25:47.000Z | 2015-02-04T00:00:00.000 | {
"year": 2015,
"sha1": "789d5d787cc78d0861c8ee2b0838bd7d8c590ae5",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/1502.01105",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c40f661e9373dd909202f2ed6804c5075bb8bdff",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Chemistry",
"Medicine"
]
} |
227068534 | pes2o/s2orc | v3-fos-license | Improving Newborn Health in Countries Exposed to Political Violence: An Assessment of the Availability, Accessibility, and Distribution of Neonatal Health Services at Palestinian Hospitals
Introduction Geopolitical segregation of Palestine has left a fragile healthcare system with an unequal distribution of services. Data from the Gaza Strip reflect an increase in infant mortality that coincided with a significant increase in neonatal mortality (12.0 to 20.3 per 1000 live births). Objective A baseline study was carried out to evaluate available resources in neonatal units throughout Palestine. Study Design A cross-sectional, hospital-based study was conducted in 2017 using the World Health Organization’s “Hospital care for mothers and newborn babies: quality assessment and improvement tool.” Data on the main indicators were updated in 2018. Results There were 38 neonatal units in Palestine: 27 in the West Bank, 3 in East Jerusalem, and 8 in the Gaza Strip. There was an uneven geographic distribution of incubators in relation to population and births that was more marked in the Gaza Strip; 79% of the neonatal units and 75% of the incubators were in the West Bank. While almost all hospitals with neonatal units accepted very and extremely low birth weight and admitted out-born neonatal cases, there was a shortage in the availability of incubators with humidifiers, high-frequency oscillatory ventilation, mechanical ventilators with humidifiers and isolation wards. There was also a considerable shortage in neonatologists, neonatal nurses, and pediatric subspecialties. Conclusion Almost all the neonatal units accepted extremely low birth weight neonatal cases despite not being ready to receive these newborns due to considerable shortages in human resources, equipment, drugs, and essential blood tests, as well as frequent disruptions in the availability of based amenities. Together, these factors contribute to the burden of providing quality care to newborns, which is further exacerbated by the lack of referral guidelines and challenges to timely referrals resulting from Israeli measures. 
Ultimately, this contributes to suboptimal care for neonates and negatively impacts future health outcomes.
Introduction
The neonatal period spans the first 28 days of a newborn's life. It is considered an integral indicator of future child survival and well-being, 1,2 as well as sustainable social and economic development at the broader level. 3 Globally, the neonatal mortality rate, defined as the probability of dying in the first 28 days of life, was 18 deaths per 1,000 live births in 2017. This included approximately 2.5 million newborns who died within the first month, with the highest percentage dying in the first week. Specifically, 36% died on the day they were born, and three-quarters died during the first week of life. 3 Between 2000 and 2017, the neonatal mortality rate decreased by 41%. This decline was less than the reduction in the mortality rate among children aged 1-4 years old (reported as 60%). 3 Despite the decrease in the neonatal mortality rate, there are two main problems with this trend: the rate of decline is slower than in any other reported period, 2 and the reduction in the neonatal mortality rate has not been equal among different countries. 2 Specifically, between 1990 and 2017, there was a 47% reported decline in the neonatal mortality rate in developing countries as compared to a 58% decline in developed countries.
Given the global burden of preventable neonatal deaths, neonatal health has become a priority under the United Nations Sustainable Development Goals (SDGs). SDG 3.2 urges all countries to reduce neonatal mortality to 12 neonatal deaths per 1,000 live births by 2030 (target 3.2.2). 2 Evidence indicates that a lack of accelerated action towards neonatal health will result in the death of 28 million newborns between 2018 and 2030. 3 It is therefore imperative to identify, understand, and address the causes of neonatal death, especially in low- and middle-income countries. 1 It is equally essential to identify and overcome barriers to effective case management to improve neonatal and child survival rates and formulate appropriate policies on child health and well-being. 1 As it currently stands, the burden of neonatal mortality is unequally distributed across regions: it is most pronounced in low- and middle-income countries, 4,5 where 99% of the neonatal deaths occur. 6 Many factors contribute to this trend in neonatal morbidity and mortality. The first factor, which has been termed the 3-delays model, outlines three "delays" that impact pregnancy-related mortality and constitute major challenges to improving child survival rates. The 3-delays include a delay in recognizing a problem and seeking healthcare (low utilization of services); a delay in reaching the health facility; and a delay in receiving appropriate care at the health facility. A second factor is the migration of the health workforce from low-income to high-income countries, creating a disproportionate gap in the availability of health professionals. This gap is further exacerbated by a third factor, which is the lack of sufficient supplies, equipment, and resources to provide basic care to neonates and respond to projected needs, such as neonatal resuscitation.
Other factors include a lack of prevention strategies that correspond to neonatal health needs; weaknesses in existing health systems in terms of equitable access to services; and inadequate knowledge, skill, and experience of healthcare workers to respond appropriately to neonatal health ailments. 6 As a developing middle-income country, Palestine mirrors some of the same challenges faced globally in neonatal health. However, it is challenging to examine trends in neonatal mortality in the country because of the incompleteness of the national death registry. 7 The neonatal mortality rate in Palestine in 2017 (11.3 deaths per 1000 live births) is underreported. According to the Palestinian Central Bureau of Statistics, the death registry was only 60.2% complete in 2013, primarily due to the underreporting of infant deaths, for which completeness in the death registry was estimated to be 25.6%. 8 As a result, in 2013, while the reported infant mortality rate was 5.66 deaths per 1000 infants, the estimated mortality rate was 18.11 deaths per 1000 infants. 8 Based on a 2015 United Nations Relief and Works Agency (UNRWA) study in the Gaza Strip (GS), where 67% of the population are refugees and are served by UNRWA, between 2008 and 2013, infant mortality increased from 20.2 per 1000 live births to 22.4 per 1000 live births. This change reflected a significant increase in the neonatal mortality rate (from 12.0 to 20.3 per 1000 live births, p = 0.01). The main causes of death were preterm birth, congenital anomalies, and infections. 9 Significant challenges exist regarding the scaling up of service delivery and the enhancement of quality, integration, and continuity of neonatal care. 
Healthcare delivery in Palestine faces inequality resulting from disparities in the availability of healthcare services, which is due in part to the geopolitical segregation imposed by the presence of multiple checkpoints and the separation wall, as well as the inadequate distribution of services. 10 The legislative and physical division of Palestine, in terms of both the separation of the GS from the West Bank (WB) and the fragmentation of the occupied WB, present major difficulties for the cohesiveness of the health system and access to staff, ambulances, patients, and patients' relatives. 11,12 East Jerusalem (EJ) has been isolated from the remainder of the WB and is under full control of Israel. For Palestinians living in the WB and GS, EJ is largely inaccessible. The GS has been under an illegal Israeli blockade for over 12 years. As such, the healthcare system in Gaza is comprised of captive clients who are entirely dependent on Israel, international bodies, and the aid industry for goods and services, with no means of independent development. 13 In addition, the fragmented health sector with multiple actors often challenges effective alignment with the overall national health development agenda. 14 This is the first study to assess neonatal services in Palestine that will be used as a baseline to inform policy with the goal of improving neonatal services as one of the strategies to reduce infant and neonatal mortality. It aims to assess the availability, distribution, and accessibility of neonatal health services in Palestinian hospitals.

Journal of Multidisciplinary Healthcare 2020:13
Study Design and Setting
A cross-sectional, quantitative study was conducted at Palestinian hospitals from June to August 2016, and the main indicators were updated in August 2018. The study covered all Palestinian governmental, non-governmental, UNRWA, and private sector hospitals in the WB (including EJ) and GS that offer neonatal health services. The study survey was based on the international standardized tool: "Hospital care for mothers and new-born babies: quality assessment and improvement tool," developed by the WHO and first published in 2009. 15 The tool was used in many regions, including Africa, Europe, and the Middle East to evaluate the quality of care in hospitals, identify areas of improvement, and develop future action plans. 6 The tool was adapted for the current study following consultations with experts in neonatal and pediatric care working in Palestine.
The fieldwork consisted of a preparatory phase and a data collection phase. During the first phase, a workshop was conducted with key health service providers, including heads of hospitals and heads of neonatal units or their representatives. During the workshop, the goal of the study and its importance as a baseline study at the national and hospital levels were presented, along with the risks and benefits of participating in the study. The data collection tool was introduced and feedback from the participating hospitals was considered in refining it. All hospitals from the workshop agreed to participate in the study, and informed oral consent was obtained after explaining the study objectives and measures to protect confidentiality, and giving them the right to decline to participate in the study. Each hospital assigned a focal point to coordinate with key persons in each hospital to collect the information in the study tool, as outlined below:
- A focal point from the administrative office to collect data on neonatal admission load, specific neonatal and delivery statistics, and neonatal medical admission causes in 2015
- Head nurse of the neonatal unit to examine the availability and number of neonatal staff, specifically their shifts, credentials, and training, and the continuous availability of amenities (electricity, back-up power supply, diesel/gas, running water, hot water, heating and air-conditioning)
- Head physician of the neonatal unit to collect information on the availability of beds, equipment, and supplies for hospitals and neonatal units, protocols and guidelines in delivery and neonatal departments, and the availability of basic and support facilities (isolation wards, facilities for mothers to breastfeed newborns, and facilities for breast milk expression)
- Pharmacist to examine the availability of drugs and total parenteral nutrition (TPN) for very sick neonates
- Lab supervisor in each hospital to collect information on available laboratory tests
The study
complied with the Declaration of Helsinki. To protect confidentiality, hospital identifiers were excluded from data analysis. Following the data collection phase, all personal identifiers of the interviewed health service providers who provided information for different sections of the tool were destroyed. There was no risk of breach of confidentiality for individual hospitals, as data pertaining to each hospital was only included in the individual reports, which were sent to each hospital separately. The data presented in this paper include aggregated data from all the participating hospitals. The study proposal and the verbal informed consent process were approved by the Ministry of Health Ethical Review Committee in January 2016.
Outcome Measures
The main outcome measures of the study were:
- Number of deliveries, neonatal admissions, and referrals in 2016 (collected from registries available at the hospitals)
- Availability of:
  • Facilities: NICU unit, emergency department, isolation ward, facility for out-born admission
  • Services: in-born admission, out-born admission, very low birth weight admission, extremely low birth weight admission
- Hospital level: An additional outcome measure was to identify a hospital levelling score for each neonatal unit in accordance with a regionalized care grading system for neonatal health services (basic neonatal care vs specialty neonatal care vs sub-specialty neonatal intensive care) (Table 1). The scoring was based on the "American Academy of Pediatrics, Guidelines for Perinatal Care, Seventh Edition," and adapted to the Palestinian context. 16 This score was used to assess neonatal unit readiness and its capability to accept referrals or its need to refer its neonates to another neonatal unit to ensure optimum neonatal care.
Statistical Analysis
Analyses were conducted at the hospital and neonatal unit levels. For categorical outcome measures, data were presented as proportions (%) of the total number of neonatal units or of all hospitals, where appropriate. For continuous outcome measures, data were presented as absolute numbers or sums, where appropriate. Each outcome measure was stratified by region (WB excluding EJ, EJ, and GS), geographic area (north, central, and south of the WB, EJ, and north, central, and south of the GS), and sector (governmental or non-governmental). Descriptive data were presented to map the current situation of neonatal services in Palestine. The Statistical Package for the Social Sciences (SPSS) software, version 23, was used for data analysis.
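As a rough illustration of the descriptive analysis (the study itself used SPSS version 23), stratified proportions can be computed as follows; the hospital records are invented:

```python
from collections import Counter

# Minimal sketch of the descriptive analysis: proportions (%) of hospitals
# with a given feature, stratified by region. Records are invented.
hospitals = [
    {"region": "WB", "has_nicu": True},
    {"region": "WB", "has_nicu": False},
    {"region": "GS", "has_nicu": True},
    {"region": "GS", "has_nicu": True},
]

def proportion_by(records, stratum_key, flag_key):
    totals, hits = Counter(), Counter()
    for r in records:
        totals[r[stratum_key]] += 1
        if r[flag_key]:
            hits[r[stratum_key]] += 1
    return {s: 100.0 * hits[s] / totals[s] for s in totals}

print(proportion_by(hospitals, "region", "has_nicu"))
# prints {'WB': 50.0, 'GS': 100.0}
```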
General Characteristics of Palestinian Hospitals Offering Neonatal Services
In Palestine, there were 60 delivery hospitals and 38 hospitals with neonatal health units. Around 60% (22/38) of hospitals with neonatal units were non-governmental. The south of the WB had fewer hospitals with neonatal units; almost half of the hospitals with neonatal units (14/30) were in the north of the WB. In the GS, only one hospital with a neonatal unit was located in the north, while the other seven hospitals with neonatal units were distributed in the middle and southern areas (see Figure 1).
Burden of Maternal and Neonatal Healthcare

Large Number of Deliveries and Neonatal Admissions
Based on the registries available at the hospitals, the number of deliveries (normal and C-section) in Palestine for 2016 was reported to be 127,504 (66,696 deliveries in the WB, excluding EJ, 7424 in EJ, and 53,384 in the GS). This included a total of 29,160 (23%) C-section deliveries (16,703 (25%) in the WB, 1680 (23%) in EJ, and 10,777 (20%) in the GS). In addition, the neonatal admission load was reported to be 16,415 neonates (7880 in the WB excluding EJ, 1887 in EJ, and 6648 in the GS). The registers did not include indicators on whether admitted neonates were born within the hospital or referred from outside.
Neonatal Medical Admission Causes
The top causes of neonatal medical admission across the WB, including EJ, and the GS were transient tachypnea of the new-born (TTN), hyaline membrane disease (HMD), neonatal hyperbilirubinemia, neonatal sepsis, and prematurity. EJ hospitals also included congenital anomalies and metabolic diseases; however, these causes were not observed in hospitals in the WB and GS.
Readiness of Hospitals for the Provision of Neonatal Health Services

Facility Infrastructure
The study data indicated that 32/38 (22 in the WB (excluding EJ), 3 in EJ, and 7 in the GS) hospitals with neonatal units had an isolation facility/infectious cases ward. In addition, 18/38 (11 in the WB (excluding EJ), 2 in EJ, and 5 in the GS) hospitals with neonatal units had emergency department/beds for neonates, while only 9/38 (6 in the WB (excluding EJ), 0 in EJ, and 3 in the GS) hospitals with neonatal units had a ward/room for out-born neonatal admission.
Support Services
Almost all hospitals with neonatal units accepted both very low birth weight (VLBW, <1500 g) and extremely low birth weight (ELBW, <1000 g) neonatal cases.
Equipment
The study data indicated that there was a shortage in the availability of high-frequency oscillatory ventilation (HFOV) across hospitals in Palestine. Also, several hospitals in the WB
and GS did not have mechanical ventilators with humidifiers (see Table 2).
As shown in Figure 2, there was an uneven geographical distribution of incubators in relation to population and births that was more marked in the GS; 79% of the neonatal units and 75.1% of the incubators were located in the WB.
Human Resources
The total number of neonatologists reported in Palestine was 11, covering 15 full-time/part-time posts: 6 in EJ, 4 in the WB, and 1 in the GS. Regarding the availability of pediatricians, there were only 38 pediatricians working in the eight neonatal intensive care units (NICUs) in the GS, while there were 85 pediatricians in the WB (excluding EJ), and 12 in EJ. There was a dearth of pediatric sub-specialists across all geographic areas in Palestine, especially in the GS and Ministry of Health (MoH, governmental) hospitals. In the GS, none of the hospitals with neonatal units had a pediatric orthopedic specialist, pediatric endocrinologist, pediatric nephrologist, pediatric metabolic specialist, pediatric pulmonologist, gastroenterologist, pediatric anesthesiologist, pediatric ophthalmologist, or pediatric cardiac surgeon.
Regarding the availability of specialized nurses, only 13.5% (86/637) of nurses working in neonatal units in Palestine were specialized neonatal nurses. Of the 86 neonatal nurses available in Palestine, 55 were in the WB and 31 were in EJ hospitals. There were no neonatal nurses in the GS. Only 21% of the nurses in the MoH units in the WB were specialized neonatal nurses qualified to work in neonatal units (see Table 3).
Regarding the availability of neonatologists, pediatricians, and nurses for neonatal care, it was reported that 23/27 hospitals in the WB, 2/3 hospitals in EJ, and 6/8 hospitals in the GS had neonatologists/pediatricians on duty in the delivery ward and in other units at the hospital during their assigned shift in the neonatal unit. Also, 7/27 hospitals in the WB and 2/8 hospitals in the GS had nurses who were on duty in the delivery ward in the hospital during their assigned shift in the neonatal unit. Additionally, 9/27 hospitals in the WB and 1/8 hospitals in the GS had nurses who were available to other units at the hospital during their shifts in the neonatal unit.
Drugs
Neonatal units were assessed for the availability of essential antibiotics including ampicillin, ceftriaxone/cefotaxime, cloxacillin, and gentamicin. These essential antibiotics were not completely available in nine neonatal units in the WB and five neonatal units in the GS.
Regarding the availability of infusion drugs (glucose 5%, glucose 10%, glucose 50%, glucose with sodium chloride, potassium chloride, sodium chloride 0.9% isotonic, and sodium bicarbonate), one hospital in the WB reported that it did not have all infusion drugs, and none of the hospitals in the GS reported having all infusion drugs. The study indicated that there was a shortage of acyclovir, amphotericin B, fluconazole, and vancomycin in all hospitals in Palestine. Only 9/27 hospitals in the WB, 2/3 hospitals in EJ, and 2/8 hospitals in the GS reported having all the drugs. In reference to the availability of caffeine citrate and surfactant, 11/27 hospitals in the WB and 7/8 hospitals in the GS reported that they did not have one or both of these essential medications.
Nutrition, Breastfeeding Facilities, and Materials
It was reported that not all of the hospitals with neonatal units in Palestine had facilities that were effectively designed to promote breastfeeding. Only 69% of governmental hospitals had a facility for expressing breast milk, versus 64% of non-governmental hospitals. There was also a shortage in the availability of parenteral nutrition, which is critical for very low and extremely low birth weight cases. A shortage of total parenteral nutrition (TPN) elements was most pronounced in WB hospitals, but there was a shortage of vitamins and minerals for TPN across all geographic areas, except for EJ. TPN was only available in 29/38 neonatal units in Palestine.
Laboratory Tests
Essential blood tests, including blood glucose, hemoglobin, full blood count, blood gas analysis, blood bilirubin, renal function tests, and electrolytes, should be available in all neonatal units. In our study, only 22/27 hospitals in the WB and 7/8 hospitals in the GS had all essential blood tests. The study indicated that all blood bank tests, including the Coombs test, blood grouping and cross-matching, rhesus antibodies, major blood groups and rhesus typing, and blood cross-matching, were available in 24/27 hospitals in the WB, 2/3 hospitals in EJ, and 8/8 hospitals in the GS.
All hospitals in Palestine had all septic workup tests, including bacteriology (culture), urine culture, blood culture, cerebrospinal fluid culture, and urine analysis.
Protocols
Almost all hospitals that provide neonatal services in EJ have protocols. GS hospitals lacked protocols on the use
and maintenance of equipment. The most common protocol sources were international, followed by local or hospital-based protocols, and national or MoH protocols. Of the protocols that followed international standards, the main sources were the American Academy of Pediatrics (AAP), the Harriet Lane Handbook, and the Nelson Textbook of Pediatrics.
Amenities
The results of the study indicated that basic amenities, including electricity, back-up power supply, running water, heating, and air-conditioning, were consistently available in only 19 of the 38 neonatal facilities in Palestine.
Levelling of Neonatal Units
Based on the study findings, the distribution of the hospital levelling scores of the neonatal units is shown in Figure 3. There was only one level 4 neonatal unit, located in EJ. In the GS, there were only two level 3 units, located in the middle area.
Discussion
Our study found a shortage of resources, unequal distribution of neonatal services, and barriers to accessing neonatal services in Palestine. Together, these findings contribute to the burden of providing quality care to newborns, which is further exacerbated by the lack of referral guidelines to level 3 and level 4 hospitals. These factors compound the existing challenges posed by Israeli checkpoints and the separation wall in referring newborns in a timely manner. Ultimately, this contributes to suboptimal care for neonates and undermines their future health outcomes.
Availability of Facilities and Resources
While almost all neonatal units in both the WB and GS admit ELBW infants, many are not prepared to receive very sick and VLBW newborns due to the considerable shortage of neonatologists, neonatal nurses, pediatric subspecialty practitioners, equipment, drugs, and essential blood tests, especially in the GS. This category of infants requires at least level 3 NICU care with appropriate staff and resources, which must be considered when making referral decisions.

To improve survival and health outcomes for referred cases, neonates need to be transported from the delivery room to wards, from the ward to other services within the hospital, and between hospitals using a transport incubator and with the support of a dedicated team capable of implementing life support for babies.

Additionally, most of the hospitals with neonatal units admit out-born neonatal cases despite not having a ward/room for out-born neonatal admission or isolation wards. Because the isolation of cases is the main infection prevention measure among newborns, [17][18][19] the absence of isolation wards is a contributing factor for healthcare-associated infections. 20 Furthermore, not all hospitals have TPN to support the nutrition of very sick newborns. If a preterm infant meets the criteria to receive TPN, then it should be administered immediately, ideally within 6 hours of birth. 21

Regarding the availability of equipment, based on the American Academy guidelines and the number of beds available, only the GS has a shortage of beds, with only around half of the beds needed based on live births. This is reflected in the quality of care in neonatal units in Gaza and may affect the morbidity and mortality of neonates, especially tiny and sick ones. The low number of beds also results in overcrowding, with some units experiencing more than 150% occupancy. In addition, not all units have mechanical ventilators with humidification.
Inappropriate humidification can lead to mucosal injury, desquamation of cells, excessive pulmonary secretion, and reduced vital capacity. 14 These units may face difficulties caring for sick newborns and tiny babies who require prolonged ventilation. A blood gas analyzer and portable X-ray machine are also essential components for the management and monitoring of very sick newborns requiring mechanical ventilation.
Based on the study findings, basic amenities were consistently available in only half of the neonatal facilities in Palestine. However, basic amenities must be available at all times in neonatal units for ventilation, feeding, and care for sick and premature newborns. A lack of basic amenities exacerbates the shortage of neonatal beds, as an NICU bed cannot function without them and may put neonates at risk of morbidity and mortality. As a result, the remaining 19 facilities cannot be counted as functional NICU providers, although this further accentuates the lack of neonatal beds in Palestine.
Accessibility to Neonatal Services
Study findings indicate that there is an uneven distribution of neonatal units among different geographical areas in both the WB and GS. This has serious implications for access to care due to geopolitical segregation and the siege on the GS. 22 In addition, most delivery and neonatal services are outside of the MoH. This significantly impacts the cost for neonatal care and emphasizes the need for referral guidelines. For example, in 2016, there were 1793 referrals for neonatal services outside the governmental sector at an estimated cost of 10 million USD. 23
Distribution of Neonatal Health Services at Palestinian Hospitals
Neonatal services are not adequately geographically distributed to match the population and birth rate. Most neonatal services are outside the GS, and most neonatal units in Gaza are level 1 and 2. In addition to shortages in specialized services and equipment in Gaza, the ongoing siege has severely increased the barriers to accessing timely and quality neonatal care.
The only level 4 hospital is located in EJ, and is largely inaccessible to patients from the WB and GS who require a permit from the Israeli authorities to enter, which is not guaranteed. Additionally, in the WB, neonatal services are mostly in the north and middle WB, which contributes to the shortage of beds in the south and delays accessing neonatal care due to checkpoints.
Limitations
The study had several limitations. First, credentials of the neonatologists were not assessed. Second, the study only examined the availability of resources, not actual quality of care. However, this study is unique in several respects, including that it is the first assessment of neonatal health services in Palestine. In addition, the results of the study could potentially reflect similar challenges in other low-income countries where resources are scarce and political challenges hinder the delivery of quality health services. Likewise, outcomes and initiatives resulting from the study can provide examples of methods to address these challenges in similar contexts. For example, the study resulted in the creation of a leveled system of neonatal units in Palestine and was used as a guideline for decision makers to identify resources available in the country and build the neonatal-perinatal referral protocol accordingly.
The protocol will ensure the provision of standard care for newborns and is the first step towards the regionalization of perinatal and neonatal healthcare in Palestine. The ultimate goal is the creation of a Palestinian neonatal network to optimize outcomes, provide the best possible care for patients, and ensure high standards that will significantly aid in decreasing the burden on the healthcare system.
Conclusion
To ensure quality care, neonatal units with different assigned levels of service should have the required number of specialists, adequate equipment, supporting facilities, and guidelines for the referral of pregnant women and newborns. Our study found that there is a shortage in the availability of key components of neonatal care (such as incubators) as well as barriers to access to neonatal services. This was coupled with an unequal distribution of neonatal services across the WB, GS, and EJ, hindering the optimal delivery of care to neonates. This is vital for the GS in particular because of the ongoing siege and the challenge of referring patients outside the region. Due to the siege on the GS, the separation of EJ from the WB, the fragmentation of the WB by many checkpoints and road barriers, limited resources, and the high costs of referrals outside the MoH, as well as the need for timely management of high-risk pregnancies and very sick newborns, there is an urgent need to strengthen neonatal health services in Palestine. In order to have effective and high-quality neonatal services in place, interventions should target four primary issues:

1. In addition to the challenges to timely referral posed by Israel, it is vital to improve regionalization and build an effective referral system.
2. Study findings on resource availability indicate that certain hospitals should or should not accept certain cases of neonates.
3. There is also a need to develop a referral system for pregnant women and neonates based on the proposed levelling of neonatal units.
4. As there is only one level 4 hospital in Palestine, and only 12 out of the 38 units are level 3 units, and given that almost all accept out-born neonates, there is a need to develop an effective transport system. Ambulances are poorly equipped to transport patients, and there is currently no dedicated neonatal transport service.
"year": 2020,
"sha1": "2a057a753b2ad69a4b80e46e1cb593d5fa09b0b1",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=63701",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6c24d36e58c874365133f859eab5000618ddb78f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251367191 | pes2o/s2orc | v3-fos-license | A Novel m6A-Related LncRNA Signature for Predicting Prognosis, Chemotherapy and Immunotherapy Response in Patients with Lung Adenocarcinoma
N6-methyladenosine (m6A) and long non-coding RNA (lncRNA) have been associated with cancer prognosis and the effect of immunotherapy. However, the roles of m6A-related lncRNAs in the prognosis and immunotherapy in lung adenocarcinoma (LUAD) patients remain unclear. We evaluated the m6A modification patterns of 695 samples based on m6A regulators, and prognostic m6A-related lncRNAs were identified via a weighted gene co-expression network analysis. Twelve abnormal m6A regulators and nine prognostic lncRNAs were identified. The tumor microenvironment cell-infiltrating characteristics of three m6A-related lncRNA clusters were highly consistent with the three immune phenotypes of tumors, including immune-excluded, immune-inflamed and immune-desert phenotypes. The lncRNA score system was established, and high lncRNA score patients were associated with better overall survival. The lncRNA score was correlated with the expression of the immune checkpoints. Two immunotherapy cohorts supported that the high lncRNA score enhanced the response to anti-PD-1/L1 immunotherapy and was remarkably correlated with the inflamed immune phenotype, showing significant therapeutic advantages and clinical benefits. Furthermore, the patients with high lncRNA scores were more sensitive to erlotinib and axitinib. The lncRNA score was associated with the expression of miRNA and the regulation of post-transcription. We constructed an applied lncRNA score-system to identify eligible LUAD patients for immunotherapy and predict the sensitivity to chemotherapeutic drugs.
Introduction
In 2021, lung cancer accounted for one-quarter of all of the cancer-related deaths on a global scale [1], and nearly 40% of all of the lung cancer cases fall into non-small cell lung cancer (NSCLC) [2]. Despite significant advances in cancer therapy, such as radiation therapy, chemotherapy, surgical resection and immunotherapy, which have made considerable progress in prolonging the survival of patients, the long-term prognosis for these patients remains unsatisfactory [3]. Therefore, it is essential to discover novel biomarkers and comprehensive insights into the mechanism for predicting an efficacious therapy for lung adenocarcinoma (LUAD).

m6A, methylation at the N6 position of adenosine, is the most abundant modification of RNA. The m6A modification regulates the transcription, stability, splicing, degradation, localization, transport and translation of RNA [4,5]. The m6A modification is reversible and mediated by three types of regulators, including methyltransferases (writers), demethylases (erasers) and methylation recognition enzymes (readers). Therefore, m6A modification and regulators play vital roles in the carcinogenesis and the development of cancers, while novel mechanisms of the m6A modification remain largely unknown.
Accumulating evidence has revealed that long non-coding RNAs (lncRNAs), as crucial epigenetic regulators, affect numerous biological processes with diverse mechanisms, including cell proliferation, metastatic progression [6], apoptosis [7] and the stemness and modulation of metabolism [8], especially in cancers. Moreover, the intracellular functions of the lncRNAs are mediated by the m6A regulators, indicating complex and multiple interactions between these molecules. For example, the lncRNA PRADX promotes nuclear factor-κB (NF-κB) activity via the suppression of UBX domain protein 1 (UBXN1), inducing the tumorigenesis of glioblastoma and colon adenocarcinoma by interacting with the enhancer of zeste homolog 2 (EZH2) [9]. Thus, the further identification of the m6A-related lncRNAs and an exploration of their functions in malignancies are imperative.
Immune checkpoint blockade (ICB) therapies, such as monoclonal antibodies against programmed death 1 (PD-1) or programmed death ligand 1 (PD-L1) and cytotoxic T-lymphocyte-associated protein 4 (CTLA4), have achieved unprecedented efficacy in a wide range of malignancies by boosting the immune system to fight cancer. Notably, it has been shown that pembrolizumab is related to remarkably prolonged overall survival and progression-free survival (PFS) duration in patients with advanced NSCLC and PD-L1 expression on a minimum of 50% of tumor cells, in contrast to platinum-based treatments [10]. Although the effect of treatment for lung cancer patients has improved with the application of ICB-based immunotherapies, only a small proportion of individuals may gain benefit from immunotherapy. Hence, it is critical to predict and identify the best candidates for immunotherapy and provide individualized drug treatment.
Our study identified 12 m6A regulators that were differentially expressed between LUAD and the adjoining normal tissues. Nine hub m6A-related lncRNAs were detected from a key module by a weighted gene co-expression network analysis (WGCNA) and univariate Cox regression. We successfully identified three distinct m6A-related lncRNA subgroups, as well as three distinct lncRNA-related gene subtypes. The tumor microenvironment cell-infiltrating characteristics of the three m6A-related lncRNA clusters were highly consistent with the three immune phenotypes of the tumors. Moreover, the lncRNA score was constructed to predict the lncRNA modification in individuals and validated to anticipate the response to anti-PD-1/L1 immunotherapy and chemotherapeutic drugs. The lncRNA score was highly correlated with the expression of miRNA and with post-transcriptional regulation. Therefore, our research established an applied scoring scheme, based on the m6A-related lncRNAs, to identify the LUAD patients who are eligible for immunotherapy and to predict sensitivity to chemotherapeutic drugs.
Data Acquisition
The Gene-Expression Omnibus (GEO) and the Cancer Genome Atlas (TCGA) databases were searched to acquire the LUAD RNA expression profiles, along with the corresponding complete clinical annotations. A LUAD cohort, GSE43458 [11], containing 110 patients, was included for further analysis, while two immunotherapy cohorts (IMvigor210 [12] and GSE78220 [13]) were also involved in our analysis. Table S1 (Supplementary Materials) provides a list of the cutoff thresholds used in the present research. The targeted mRNAs of the miRNAs were evaluated by FunRich 3.1.3, http://www.funrich.org/ (accessed on 25 February 2022), and the targeted signaling pathways of the miRNAs were enriched with the Kyoto Encyclopedia of Genes and Genomes (KEGG). Alternative polyadenylation (APA) data were downloaded from the Cancer 3′UTR Atlas (TC3A), http://tc3a.org (accessed on 28 March 2022) [14], and the alteration of the APA usage in each tumor can be quantified as a change in the distal poly(A) site-usage index (PDUI), identifying 3′UTR lengthening (positive index) or shortening (negative index) [15].
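The PDUI-based quantification of APA described above can be sketched as follows; the 0.1 threshold and the example values are illustrative assumptions, not TC3A defaults:

```python
# Sketch of the ΔPDUI quantification: a positive change in the distal
# poly(A) site-usage index (PDUI) between tumor and normal indicates
# 3'UTR lengthening, a negative change indicates shortening.
# Threshold and example values are invented for illustration.

def classify_apa(pdui_tumor, pdui_normal, threshold=0.1):
    delta = round(pdui_tumor - pdui_normal, 3)
    if delta > threshold:
        return delta, "3'UTR lengthening"
    if delta < -threshold:
        return delta, "3'UTR shortening"
    return delta, "no marked change"

# a tumor using the distal poly(A) site much less often than normal tissue
print(classify_apa(0.35, 0.70))
```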
WGCNA
One thousand lncRNAs, selected by median absolute deviation (MAD), were used to establish a co-expression network with the WGCNA package in R to explore the relationship between the modules and the m6A regulators. Following the deletion of the outliers at a cutoff threshold of 35 and with a minimum sample size of 50, the data were subjected to clustering with a hierarchical clustering algorithm. With the blockwiseModules function of the "WGCNA" package in R, an unsigned network was created, with the soft-threshold power set to 5, the cut height set to 0.1 and the minimum module size set to 30 for network formation and module detection.
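The soft-thresholding step at the heart of WGCNA can be illustrated outside of R. This minimal sketch computes the unsigned adjacency |cor|^β between two toy expression profiles with β = 5, the power used in this section; it is not a substitute for the full module-detection pipeline of the R "WGCNA" package, and the expression vectors are invented:

```python
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def soft_adjacency(x, y, beta=5):
    # unsigned WGCNA adjacency: |Pearson correlation| raised to the soft power
    return abs(pearson(x, y)) ** beta

lnc_a = [1.0, 2.0, 3.0, 4.0, 5.0]
lnc_b = [1.1, 1.9, 3.2, 3.8, 5.1]  # strongly co-expressed with lnc_a
lnc_c = [5.0, 1.0, 4.0, 2.0, 3.0]  # weakly related to lnc_a

print(round(soft_adjacency(lnc_a, lnc_b), 3))  # close to 1: same module
print(round(soft_adjacency(lnc_a, lnc_c), 3))  # near 0: suppressed by beta
```

Raising the correlation to a power preserves strong co-expression while shrinking weak correlations toward zero, which is what pushes the network toward scale-free topology.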
Unsupervised Clustering for 9 LncRNAs and Principal Component Analysis (PCA)
The R package "limma" was utilized to standardize the data and identify lncRNAs with the prognostic values. The "ConsensusClusterPlus" package was used to conduct an unsupervised clustering algorithm on the lncRNAs for the purpose of classifying the LUAD patients into distinct subtypes based on the results of the study [16]. The number of clusters (K) and their stability were determined by the consensus clustering algorithm. The R package "PCA" was employed to verify the results of the grouping.
Gene Set Variation Analysis (GSVA)
To explore the different biological functions between the lncRNA subtypes, we conducted GSVA using the "GSVA" package in R software.
Identification of Differentially Expressed Genes (DEGs) between LncRNA Subtypes
To reveal the lncRNA-related genes, we classified patients into different gene subtypes based on the expression of genes. The empirical Bayesian approach of "limma" R package was applied to determine the DEGs between the different gene subgroups.
Establishment of LncRNA Score
We constructed a scoring system to quantitatively determine the lncRNA-associated pattern in individual LUAD patients, and the lncRNA phenotype-related gene signature was named the lncRNA score. The genes with prognostic significance were identified with a Cox regression model. For the purpose of identifying the overlapping DEGs and classifying the patients into distinct subsets, an unsupervised clustering technique was utilized. The clusterProfiler R package was adopted to annotate the gene patterns. To define the number of clusters and their stability, the consensus clustering algorithm was applied. For the gene-expression analysis normalized by TPM methods, the expression of each gene in a signature was first transformed into a z-score. Principal components (PC) 1 and 2 were then extracted to serve as the signature score. Subsequently, we computed each patient's lncRNA score using a method similar to that used in previous studies [17]:

lncRNA score = Σ (PC1_i + PC2_i)

where i indicates the expression of the lncRNA-related genes. The patients were further classified into the low and high lncRNA score groups according to the median score.
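A minimal sketch of the scoring arithmetic, assuming the PCA has already been run: expression is z-scored per gene, each patient is projected onto PC1 and PC2, and the projections are summed. The loading vectors and expression values below are hypothetical stand-ins for real PCA output (e.g. from R's prcomp):

```python
from math import sqrt

def zscore(values):
    n = len(values)
    mean = sum(values) / n
    sd = sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return [(v - mean) / sd for v in values]

# rows = genes, columns = patients (toy numbers)
expr = {"geneA": [2.0, 4.0, 6.0],
        "geneB": [1.0, 5.0, 9.0]}
z = {g: zscore(v) for g, v in expr.items()}

# hypothetical PC loadings for (geneA, geneB), standing in for PCA output
pc1 = {"geneA": 0.71, "geneB": 0.71}
pc2 = {"geneA": 0.71, "geneB": -0.71}

def lncrna_score(patient_idx):
    proj1 = sum(pc1[g] * z[g][patient_idx] for g in z)
    proj2 = sum(pc2[g] * z[g][patient_idx] for g in z)
    return proj1 + proj2

scores = [round(lncrna_score(i), 3) for i in range(3)]
print(scores)  # patients above the median score form the high-score group
```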
Mutation Profiles
The significantly mutated genes in the low and high lncRNA score groups and the interaction effects of the gene mutations were analyzed with the maftools R package. The total number of nonsynonymous mutations in the TCGA-LUAD cohort was examined to determine the tumor mutation burden (TMB).
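The TMB computation can be sketched as follows. The study counts nonsynonymous mutations per patient; normalizing by the captured exome size (~38 Mb is a common convention, assumed here for illustration rather than taken from the paper) yields mutations per megabase:

```python
# Variant classes treated as nonsynonymous; this set is a common convention
# and is illustrative, not the paper's exact filter.
NONSYNONYMOUS = {"Missense_Mutation", "Nonsense_Mutation",
                 "Frame_Shift_Del", "Frame_Shift_Ins", "Splice_Site"}

def tmb(variant_classes, exome_mb=38.0):
    """Return (nonsynonymous count, mutations per Mb)."""
    count = sum(1 for v in variant_classes if v in NONSYNONYMOUS)
    return count, count / exome_mb

# toy per-patient variant classification list
patient = ["Missense_Mutation", "Silent", "Nonsense_Mutation",
           "Missense_Mutation", "3'UTR"]
total, per_mb = tmb(patient)
print(total, round(per_mb, 3))  # 3 nonsynonymous mutations
```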
Prediction of Chemotherapeutic Drugs
To evaluate the different sensitivities to chemotherapeutic agents in the high and low lncRNA score subgroups, the pRRophetic algorithm was applied to predict the half-maximal inhibitory concentration (IC50) values of 138 drugs, based on the Cancer Cell Line Encyclopedia (CCLE) [18].
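pRRophetic itself imputes IC50 values from expression profiles via ridge regression trained on cell-line data; as background for the IC50 quantity being predicted, this sketch interpolates the concentration giving 50% viability from a toy dose-response curve (log-linear interpolation between bracketing doses; all numbers invented):

```python
from math import log10

def ic50(doses, viability):
    """Interpolate the dose at 50% viability on a log10 dose scale."""
    for (d0, v0), (d1, v1) in zip(zip(doses, viability),
                                  zip(doses[1:], viability[1:])):
        if v0 >= 0.5 >= v1:  # bracket the 50% crossing
            t = (v0 - 0.5) / (v0 - v1)
            ld = log10(d0) + t * (log10(d1) - log10(d0))
            return 10 ** ld
    return None  # curve never crosses 50% viability

doses = [0.01, 0.1, 1.0, 10.0]   # uM
viab  = [0.95, 0.80, 0.40, 0.10]  # fraction of cells surviving
print(round(ic50(doses, viab), 3))
```

A lower imputed IC50 for one score group, as reported for erlotinib and axitinib in this paper, indicates that less drug is predicted to be needed for the same inhibitory effect.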
Statistical Analysis
The expression levels of RNAs in the tumor tissues and the adjacent normal tissues were compared with the Wilcoxon test. By performing the Kaplan-Meier analysis in conjunction with a log-rank test, we compared the OS of the various groups. Univariate Cox regression of OS was conducted to discover the prognosis-related molecules. The R software (version 4.0.5) was utilized for all statistical analyses and for the generation of figures. All statistical tests were two-sided, with p < 0.05 serving as the criterion for statistical significance.

Figure S1 (Supplementary Materials) depicts the workflow of the present research. In the TCGA-LUAD cohort (Figure 1A) and the GSE43458 dataset (Figure 1B), the levels of METTL14, ZC3H13, FTO and ALKBH5 were consistently lower, while the levels of RBM15, YTHDF1, YTHDF2, HNRNPC, LRPPRC, HNRNPA2B1, IGF2BP3 and RBMX were higher in the tumor tissues as opposed to the adjacent tissues. Therefore, we selected these 12 abnormally expressed m6A regulators for further detailed analysis.
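The Kaplan-Meier estimator behind the OS comparisons in the statistical analysis can be sketched in a few lines: at each event time t, survival is multiplied by (1 − d_t/n_t), where d_t is the number of deaths at t and n_t the number still at risk. The toy cohort below is invented; the study used R with a log-rank test:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier curve from follow-up times and event flags (1 = death,
    0 = censored). Returns (time, survival probability) at each event time."""
    data = sorted(zip(times, events))
    n = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < n:
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)  # deaths at time t
        at_risk = n - i                          # subjects with time >= t
        if d:
            surv *= 1 - d / at_risk
            curve.append((t, surv))
        while i < n and data[i][0] == t:         # advance past ties
            i += 1
    return curve

times  = [5, 8, 8, 12, 15]
events = [1, 1, 0, 1, 0]
print(kaplan_meier(times, events))  # survival drops at t = 5, 8 and 12
```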
m6A-Related LncRNAs Associated with the Prognosis of LUAD
Increasing evidence has demonstrated that lncRNAs play a key role in the progression of, and immunotherapy for, cancers [19], while the lncRNAs are regulated by the m6A regulators [20]. To elicit the correlation between the m6A regulators and lncRNAs, we performed the WGCNA on the TCGA-LUAD cohort, incorporating differentially expressed lncRNAs to identify the key module most closely related to the m6A regulators (Figure 1C). As shown in Figure 1D, beta (β) = 4 (scale-free R² = 0.79, slope = −1.7) was set as the soft threshold. A total of five modules were obtained after merging similar modules (Figure 1E). As shown in a heatmap of the module-trait relationships, the turquoise module, containing 438 lncRNAs, was considered as a novel module, and it was the most positively correlated with the m6A regulators, including writers, erasers and readers (Figure 1F; Table S2, Supplementary Materials). Moreover, the turquoise module had the greatest module significance of all the modules with the m6A writers (Figure S2A, Supplementary Materials), erasers (Figure S2B, Supplementary Materials) and readers (Figure S2C, Supplementary Materials), which indicated a strong correlation with the m6A modification. The correlation coefficient and the p-value between the module membership and gene significance were 0.91 and 8.2 × 10^−169, respectively (Figure 1G). Hence, the turquoise module was the module most positively correlated with the m6A regulators. To further determine the prognosis-related lncRNAs from the turquoise module, we performed a univariate Cox regression analysis and nine lncRNAs were detected (Figure 1H). High levels of the nine lncRNAs were significantly related to low OS rates in LUAD patients (Figure S2D, Supplementary Materials). Therefore, the nine m6A-related lncRNAs were identified as being associated with the prognosis of LUAD.
Figure 1 legend (continued): (G) scatter plot of the lncRNAs in the turquoise module, with the correlation coefficient and p-value; (H) univariate Cox regression analysis for the 9 lncRNAs from the turquoise module. * p < 0.05, ** p < 0.01, *** p < 0.001.
Three LncRNA Clusters Were Highly Consistent with the Three Immune Phenotypes
By conducting unsupervised clustering according to the levels of the nine lncRNAs, the patients from the TCGA-LUAD cohort were divided into three subtypes, named lncRNA clusters A/B/C (Figure S3A-C, Supplementary Materials). The PCA results showed a relatively clear distinction among the three clusters (Figure 2A). LncRNA cluster C indicated a better prognosis than lncRNA clusters A/B (Figure 2B). In addition, the heatmap showed the clinicopathological features and the levels of the nine lncRNAs (Figure 2C), while the expression of the 23 m6A regulators differed remarkably among the three clusters (Figure 2D).
To identify the biological roles of the three lncRNA clusters, GSVA enrichment analysis was conducted. Compared with lncRNA clusters A and B, lncRNA cluster C was associated with full immune activation, including the B- and T-cell receptor signaling pathways, the chemokine signaling pathway, natural killer cell-mediated cytotoxicity and the Toll-like receptor signaling pathway (Figure 2E,F). In addition, lncRNA cluster C was rich in infiltration by various activated immune cells (Figure 2G). Considering its matching survival advantage, lncRNA cluster C was classified as an immune-inflamed phenotype, characterized by adaptive immune cell infiltration and immune activation. Even though lncRNA cluster A was correlated with immune suppression processes (Figure 2E), it was relatively highly correlated with innate immune cells, including macrophages, mast cells, monocytes, natural killer cells, eosinophils and MDSCs (Figure 2G). Strikingly, lncRNA cluster A was strongly associated with TGF-β family members and TGF-β family member receptors (Figure 2H). Numerous studies have revealed that the immune-excluded phenotype is characterized by the presence of abundant immune cells and upregulation of the TGF-β signaling pathway, while the immune cells are retained in the stroma surrounding the nests of tumor cells and do not penetrate the tumor parenchyma [21,22]. Accordingly, lncRNA cluster A was considered the immune-excluded phenotype. Furthermore, lncRNA cluster B showed few immune cells and suppression of the immune response (Figure 2F-H), in accordance with the main characteristics of the immune-desert phenotype. Therefore, the three lncRNA clusters presented significantly distinct tumor microenvironment (TME) cell-infiltration characteristics.
The levels of the nine hub lncRNAs differed significantly among the three gene clusters (Figure 3B), while the clinicopathological characteristics of those gene clusters are shown in Figure S3G (Supplementary Materials). Strikingly, gene cluster A showed a dramatically better prognosis than the other clusters (Figure 3C). Although our data revealed the prognostic relevance of lncRNA-related gene modification, we went on to construct an applied score to quantify the lncRNA modification pattern in individual patients, based on the 105 lncRNA-related DEGs. On evaluation, we found that the patients in lncRNA cluster C (Figure 3D) and gene cluster A (Figure 3E) had high lncRNA scores. The process of constructing the lncRNA score is depicted in an alluvial diagram (Figure 3F). Furthermore, we performed an overlap analysis of the three subtypes: 51.8% of the patients in the high lncRNA score group overlapped with lncRNA cluster A, and 38.4% of the samples in the low lncRNA score group overlapped with lncRNA cluster B (Figure S4A, Supplementary Materials). Meanwhile, 76% of the cases in the high lncRNA score group overlapped with gene cluster A, while 48.8% of the patients in the low lncRNA score group overlapped with gene cluster B (Figure S4B, Supplementary Materials). The survival rate in the high lncRNA score group was much higher than that in the low lncRNA score group (70% vs. 46%; Figure 3G), similar to the results for early- (T1-2) and advanced-stage (T3-4) lung cancer (Figure S4C,D, Supplementary Materials). Consistent with this finding, the average lncRNA scores were significantly higher in surviving patients than in deceased patients (Figure 3H). The Kaplan-Meier analysis indicated a favorable prognosis for patients in the high lncRNA score group (Figure 3I; p < 0.001).
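The excerpt does not spell out how the lncRNA score is computed from the 105 DEGs; in many m6A-signature studies the score is the sum of the first two principal components of the z-scored DEG expression matrix. A hypothetical sketch under that assumption (function name is ours, not the paper's):

```python
import numpy as np

def lncrna_score(deg_expr):
    """Hypothetical lncRNA score: sum of the projections onto the first
    two principal components of the z-scored patients x genes DEG matrix.
    The paper's exact construction is not given in this excerpt; this
    PCA-based form is common in m6A-signature studies."""
    z = (deg_expr - deg_expr.mean(axis=0)) / deg_expr.std(axis=0)
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    pcs = z @ vt[:2].T                      # scores on PC1 and PC2
    return pcs.sum(axis=1)                  # score_i = PC1_i + PC2_i
```

Patients would then be split into high/low score groups at a cut-off (Table S1 lists the cut-offs actually used).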
Moreover, the patients with high lncRNA scores were correlated with early clinicopathological features and stages (Figure 3J), which suggested that these patients were characterized by lncRNA cluster C and the immune-inflamed phenotype with a survival advantage. In both the univariate and multivariate Cox regression analyses, the lncRNA score independently served as a prognostic indicator (Figure S4E,F, Supplementary Materials). The nomogram shows that the lncRNA score was a predictive biomarker for LUAD (Figure S4G, Supplementary Materials).
LncRNA Score Associated with Immune Checkpoints
To examine the possible mechanisms of the lncRNA score in LUAD, immunotherapy-related factors, including TMB and immune checkpoints, were analyzed. Although TMB did not differ between the low and high lncRNA score groups (Figure 4A), the lncRNA score was positively correlated with TMB (Figure 4B). There were no survival differences between the high and low TMB subgroups (Figure 4C). However, combining the TMB and lncRNA scores, we found that patients with both a high lncRNA score and high TMB exhibited a favorable prognosis, in contrast with those in the low lncRNA score group (Figure 4D). As shown in Figure S4H (Supplementary Materials), the lncRNA score was associated with tumor-infiltrating immune cell types, including activated B cells, activated CD4 T cells and monocytes. The difference in TME cells between the two lncRNA score groups was also explored. Infiltration by plasma cells, resting dendritic cells, resting mast cells and regulatory T cells was higher in the low lncRNA score group, while activated mast cells, activated CD4 T cells and macrophages were highly enriched in the high lncRNA score group (Figure 4E), indicating that patients with high lncRNA scores were immune activated. Our data provided evidence that the lncRNA score was related to the immune signature, including TMB and infiltrating immune cells.
According to the Wilcoxon test, 15 HLA family genes (Figure 4F) and 38 immune checkpoints (Figure 4G) varied significantly between the two lncRNA score groups. Moreover, the lncRNA score was strongly associated with the expression levels of 19 HLA family genes and 34 immune checkpoints (Figure 4H). In summary, these results indicated that the lncRNA score was strongly correlated with tumor immune checkpoints.
(Figure 4 caption, continued): (E) difference in the relative abundance of immune cell infiltration in the TME between the high and low lncRNA score groups (a difference > 0 indicates enrichment in the low lncRNA score group; column color represents statistical significance of the difference); (F) expression of HLA family genes and (G) immune checkpoints in the lncRNA score groups; (H) correlation between the lncRNA score and the expression of HLA family genes/immune checkpoints. * p < 0.05, ** p < 0.01, *** p < 0.001; ns, not significant; TPM, transcripts per million.
LncRNA Score Predicted Immunotherapeutic Benefits
We explored the predictive significance of the lncRNA scores for the responsiveness to ICB treatment in two immunotherapy groups. The patients with the high lncRNA scores exhibited a more favorable prognosis condition in contrast to those in the low lncRNA score group with anti-PD-L1 (IMvigor210, Figure 5A) and anti-PD-1 (GSE78220, Figure 5B) treatment. The patients with the high lncRNA scores had remarkable therapeutic benefits and enhanced immune responsiveness to the PD-L1 blockade ( Figure 5C,D). Furthermore, it was shown that the patients who had a combined high lncRNA score and low neoantigen load benefited significantly in terms of survival ( Figure 5E). In IMvigor210, the high lncRNA scores were significantly associated with the inflamed immune phenotype, and the checkpoint inhibitors exerted an antitumor effect in this phenotype ( Figure 5F). Therefore, the lncRNA score was shown to be significantly correlated with the tumor immune phenotypes and useful in predicting the response to anti-PD1/L1 immunotherapy.
Mutation Status in the High and Low LncRNA Score Groups
To further determine the lncRNA score-related mechanisms in LUAD, we examined mutation profiles: more somatic mutations and non-synonymous mutations were identified in the low lncRNA score group (Figure 6A,B). The frequently mutated genes are shown in Figure 6C,D. Notably, five genes (BRAF, DCAF4L2, CFAP47, EGFR and OR2W3) mutated more frequently in the patients with high lncRNA scores. Fifteen genes were frequently mutated in patients in the low lncRNA score group, including ITGAX, TP53, ABCB5, SMARCA4, GRM5, XIRP2, TLR4, GRIN2B, COL22A1, SYNE1, ANKRD30A, COL12A1, CENPF, PRKDC and ZDBF2 (Figure 6E). In addition, significant co-occurrences were found among the mutations of these genes in the high (Figure 6F) and low lncRNA score subgroups (Figure 6G).
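Pairwise mutation co-occurrence of the kind shown in Figure 6F,G is typically assessed with a one-sided Fisher's exact test over the 2×2 patient contingency table. A stdlib-only sketch (function name ours; the paper's exact test is not stated in this excerpt):

```python
from math import comb

def cooccurrence_p(mut_a, mut_b):
    """One-sided Fisher's exact (hypergeometric) p-value for mutation
    co-occurrence: probability of observing at least the seen number of
    double-mutant patients if the two genes mutated independently.
    mut_a, mut_b: booleans over the same patients. Illustrative only."""
    n = len(mut_a)
    n11 = sum(a and b for a, b in zip(mut_a, mut_b))
    ra = sum(mut_a)                          # patients mutated in gene A
    cb = sum(mut_b)                          # patients mutated in gene B
    upper = min(ra, cb)
    tail = sum(comb(cb, k) * comb(n - cb, ra - k)
               for k in range(n11, upper + 1))
    return tail / comb(n, ra)
```

A small p-value here indicates co-occurrence; testing the opposite tail would flag mutual exclusivity instead.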
LncRNA Score Predicted the Sensitivity to Chemotherapeutic Drugs
To evaluate the value of the lncRNA score for predicting drug response, the IC50 values of 138 drugs were calculated (Figure 7A; Table S4, Supplementary Materials). We found that the low lncRNA score patients had greater sensitivity to gemcitabine (Figure 7B), docetaxel (Figure 7C), cisplatin and paclitaxel, while those in the high lncRNA score group exhibited greater sensitivity to erlotinib (Figure 7D) and axitinib (Figure 7E), suggesting that the lncRNA score is a predictive biomarker for medications against LUAD.
LncRNA Score Was Correlated with MiRNA and Post-Transcriptional Regulation
It has been found that m6A peaks are enriched at miRNA target sites and that m6A RNA methylation is regulated by miRNAs, so we hypothesized that the lncRNA score is strongly associated with miRNA expression as a potential mechanism. In the TCGA-LUAD cohort, we identified 33 differentially expressed miRNAs between the high and low lncRNA score groups. The miRNA-targeted genes were enriched in the PI3K-Akt signaling pathway, autophagy and other pathways (Figure 8A). Seven out of twenty-six miRNA-targeted genes in autophagy were highly expressed, while the targets of the miRNAs with lower expression in the high lncRNA score group were enriched in the cAMP signaling pathway (11/23) and the cGMP-PKG signaling pathway (11/22). Our data indicated that the lncRNA score was significantly correlated with miRNA expression and the regulation of these signaling pathways.
To explore the association between the lncRNA score and post-transcriptional characteristics, we analyzed the APA events in the TCGA-LUAD cohort. We identified the genes with APA differences between the high and low lncRNA score groups and explored their prognostic value to reveal whether 3′ UTR length affects the survival of LUAD patients (Figure 8B). Genes with lengthening APA events were found in the low lncRNA score group, corresponding to poor survival (Figure 8C). CTNNBIP1 [23] and TUBA1A [24] have been considered proto-oncogenes in some cancers, and the short transcripts of these two genes were related to poor survival of the LUAD patients (Figure 8D). Moreover, CTNNBIP1 is targeted directly by miR-29b at its 3′ UTR [25]. We propose that, owing to 3′ UTR shortening of these genes, the miRNA may no longer be able to bind them, relieving the inhibition of the proto-oncogenes and contributing to the development of LUAD.
(Figure 8 caption, continued): (C) bar graphs show the difference in the percentage of distal poly(A) site usage index (PDUI), and forest plots show univariate Cox regression analyses for the PDUI-differential genes between the high and low lncRNA score groups; (D) Kaplan-Meier curves of overall survival for PDUI lengthening (red) vs. PDUI shortening (blue) of CTNNBIP1 and TUBA1A. * p < 0.05, ** p < 0.01, *** p < 0.001.
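The PDUI referenced in Figure 8C is, in DaPars-style APA analyses, the fraction of transcripts using the distal poly(A) site, so a higher PDUI means a longer 3′ UTR and a drop in PDUI indicates 3′ UTR shortening. A simplified sketch from read counts (the study's exact estimator is not shown in this excerpt):

```python
def pdui(distal_reads, proximal_reads):
    """Percentage of distal poly(A) site usage index (PDUI), simplified:
    the fraction of transcripts using the distal site. Higher PDUI means
    a longer 3' UTR; comparing group means of PDUI per gene flags
    lengthening or shortening events. Illustrative only."""
    total = distal_reads + proximal_reads
    return distal_reads / total if total else float("nan")
```

Per-gene PDUI differences between the score groups would then feed the univariate Cox regressions shown in the forest plots.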
Discussion
In the present research, we identified 12 m6A regulators that were differentially expressed between LUAD and adjacent normal tissues in the TCGA and GEO datasets. Considering the vital roles of lncRNAs in tumorigenesis and progression [26] and their mediation by the m6A regulators [27,28], we conducted a WGCNA to identify the m6A-related lncRNA module. The turquoise module was detected as a key module strongly related to the m6A regulators, and nine hub lncRNAs were identified through univariate Cox regression analysis. These nine lncRNAs were remarkably correlated with the OS of the LUAD patients, which aligns with the findings of other studies [29]. We then determined three distinct m6A modification-related lncRNA clusters. The three lncRNA clusters presented significantly different TME cell-infiltration characteristics. LncRNA cluster C correlated with immune activation and favorable prognosis and was considered an immune-inflamed phenotype. LncRNA cluster A was characterized by the presence of abundant innate immune cells and activation of the TGF-β signaling pathway, corresponding to an immune-excluded phenotype; the immune cells do not penetrate the parenchyma of these tumors but instead are retained in the stroma surrounding the nests of tumor cells [30], leading to no improvement in survival. LncRNA cluster B was immune suppressed, corresponding to an immune-desert phenotype. Hence, the TME cell-infiltrating characteristics of the three lncRNA clusters were strongly consistent with the three immune phenotypes.
To explore the potential genetic changes underlying the distinct lncRNA clusters, the patients were divided into three gene clusters. In consideration of the heterogeneity and complexity of individual tumors, an applied and reliable scoring system, the lncRNA score, was constructed to quantify the lncRNA-associated pattern of each patient, based on the expression of the DEGs. Notably, the patients with a high lncRNA score were found to have a favorable prognosis. Both the univariate and multivariate Cox regression analyses indicated that the lncRNA score independently acted as a prognostic indicator for the LUAD patients. Moreover, the remarkably prolonged survival of the group with a high lncRNA score and high TMB highlighted the benefit of a high lncRNA score. It is well known that TMB and the expression of immune checkpoints affect the efficacy of immunotherapy [31]. Notably, several HLA family genes and key checkpoint genes, including PD-1, PD-L1, TIM3 and B7-H4, were expressed differently in the high and low lncRNA score groups. Moreover, a remarkable correlation was found between the immune checkpoints and the lncRNA score. Thus, all of the above data revealed that the lncRNA score is relevant to immunotherapy for the LUAD patients.
Immunotherapy is an emerging treatment for several cancers, including lung adenocarcinoma. To validate our hypothesis that the lncRNA score is a reliable scoring system for identifying LUAD patients eligible for immunotherapy, we applied the lncRNA score to two immunotherapy cohorts. A high lncRNA score was correlated with a favorable prognosis in the anti-PD-L1 (IMvigor210) and anti-PD-1 (GSE78220) treatment groups. The PD-L1 blockade showed better therapeutic advantages and immune responses in the patients with a high lncRNA score. Furthermore, the combination of a high lncRNA score and a low neo-antigen burden served as a significant predictor of survival. Strikingly, higher lncRNA scores were dramatically associated with the inflamed immune phenotype, which provided evidence that a high lncRNA score marks responsiveness to immunotherapy. The combined results from the two immunotherapy cohorts strongly support the supposition that the lncRNA score is a predictor of the immunotherapeutic response in LUAD patients.
Nevertheless, mutation status is an unavoidable factor in the treatment effect of immunotherapy [32]. The patients with low lncRNA scores had a worse prognosis and carried more mutations in TP53, ITGAX and ABCB5. Several studies have shown that TP53 mutations often inhibit antitumor immunity and the response to cancer immunotherapy [33-35], which aligns with our findings. Furthermore, PD-1 inhibitors have demonstrated profound clinical advantages in patients with co-occurring mutations [36]. Fewer co-mutations occurred in the low lncRNA score group, in which the effect of immunotherapy was poor, consistent with our previous results. With regard to traditional first-line treatment, the lncRNA score is a useful tool to predict the effect of chemotherapy: the low lncRNA score patients were more sensitive to gemcitabine, paclitaxel, docetaxel and cisplatin, whereas those with high lncRNA scores were more sensitive to erlotinib and axitinib. Overall, the lncRNA score can serve as a predictor of clinical responsiveness to immunotherapy and a meaningful tool to evaluate drug sensitivity for LUAD patients. As to the possible mechanism of the lncRNA score, we found that it was associated with miRNA expression, and these miRNAs might target the 3′ UTRs of genes, regulating their expression levels and contributing to the progression of the cancer.
Conclusions
In summary, we found abnormal expression of the 23 m6A RNA regulators between LUAD and adjacent normal tissues. Three LUAD subtypes were obtained through consensus clustering of m6A-mediated lncRNAs, and three gene clusters were classified based on the lncRNA-related DEGs. We constructed an lncRNA score model to predict the prognosis of LUAD patients, which was highly associated with immune checkpoints and mutations. Notably, the lncRNA score is an applied scoring system to identify LUAD patients eligible for immunotherapy and to predict sensitivity to chemotherapeutic drugs.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cells11152399/s1. Table S1: The cut-off values used in the study; Table S2: Expression of differentially expressed lncRNAs in the patients in the turquoise module; Table S3: LncRNA-related DEGs in the patients; Table S4: Sensitivity of 138 drugs for LUAD patients (the Kruskal-Wallis test was applied to analyze the p-value); Figure S1: Overview of our study; Figure S2: Nine lncRNAs were highly correlated with the survival of LUAD patients; Figure S3: Unsupervised clustering of lncRNAs and DEGs ((A) consensus matrices of the TCGA-LUAD cohort for k = 2 to 5); Figure S4: Prognostic value of the lncRNA score and correlation between the clinicopathological features and the lncRNA score.
We have investigated microwave nonreciprocity in a noncentro-symmetric magnet CuB2O4. We simultaneously observed differently originated nonreciprocities; the classical magnetic dipolar effect and the magneto-chiral (MCh) effect. By rotating magnetic field in a tetragonal plane, we clearly unveil qualitative difference between them. The MCh effect signal reveals chiral transitions from one enantiomer to the other via intermediate achiral state. We show magnetoelectric effect plays an essential role for the emergence of microwave MCh effect.
In some media, the velocity and decay rate of an electromagnetic wave change upon reversal of the wave vector. In the microwave region in particular, such directional anisotropy, or nonreciprocity, has been exploited in microwave components such as isolators and circulators. It typically originates from the magnetic dipolar interaction in a ferromagnetic medium [1]. The asymmetric geometry of a ferromagnetic component in microwave circuits gives rise to nonreciprocal transmittance, irrespective of the crystal symmetry.
Recently, essentially different types of microwave nonreciprocity, which are referred to as microwave magnetoelectric (ME) effect and microwave magneto-chiral (MCh) effect, have been discerned in a chiral magnet Cu 2 OSeO 3 [2,3] and an artificial chiral magnet [4]. These effects stem from the ME effect characteristic of systems with broken time reversal symmetry (TRS) and spatial inversion symmetry (SIS).
The directional nonreciprocity of light in media with broken TRS and SIS has been extensively investigated over the last two decades. Rikken and co-workers first reported the directional dichroism of visible light in a chiral molecule under magnetic field [5]. Similar directional nonreciprocities originating from material symmetry breaking have been reported in the x-ray [6] and terahertz regions [7]. These optical nonreciprocities can be categorized into the optical ME effect and the MCh effect. The former arises in multiferroic materials where magnetization M and electric polarization P coexist: transmissions of light propagating parallel and antiparallel to the quantity P × M are inequivalent in this case [8,9]. The MCh effect is induced by the interplay between crystal chirality and magnetism [5,10-12]; the optical response (more precisely, the dielectric constant) varies as a function of the product of the chirality and k·H [13], where H and k indicate a magnetic field and the wave vector of light, respectively. The microwave ME and MCh effects are the extension of these optical nonreciprocities. Considering the importance of nonreciprocity in the microwave region, it is imperative to clarify how the ME- and MCh-driven nonreciprocities coexist with, and can be discriminated from, the classical magnetic dipolar-driven nonreciprocity in this region. Here, we report the microwave MCh effect in the noncentro-symmetric magnet CuB2O4, which simultaneously breaks TRS and SIS. We successfully unveil the qualitative difference between the nonreciprocity driven by the MCh effect and that driven by the classical magnetic dipolar effect with the use of magnetic field rotation.
CuB2O4 has been known to host multiferroic properties [14,15] and to exhibit prominent nonreciprocal responses [16-19], such as giant optical ME [17,18] and optical MCh effects [19], in the near-infrared regime. Although magnetic resonance in the microwave regime has also been reported [20,21], the nonreciprocity has never been explored. The crystal structure has the tetragonal space group I-42d, which belongs to the D2d point group. As shown in Fig. 1(a), the tetragonal unit cell includes two inequivalent Cu2+ sites, denoted Cu(A) and Cu(B). The compound exhibits two successive magnetic phase transitions, at T_N = 21 K and at T* = 9 K [22]. The first transition triggers easy-plane-type Néel order at the Cu(A) site. The Dzyaloshinskii-Moriya interaction induces a weak ferromagnetic component normal to the tetragonal axis. The Cu(B) moments, on the other hand, remain disordered or carry only a small magnetic moment, less than one fourth of that at the A site [22]. The second transition at 9 K corresponds to incommensurate helical order at both the Cu(A) and Cu(B) sites.
Our microwave measurements were mostly performed at 10 K in the weak ferromagnetic (WFM) phase, in which the canted magnetic moments induce a net in-plane magnetic moment, as shown in Fig. 1(a).
When the magnetic field (H) rotates, all the magnetic moments rotate with it, so that the net moment follows the magnetic field direction. An important feature of antiferromagnets belonging to the D2d point group, such as CuB2O4, Ba2CoGe2O7 [23,24] and Ca2CoSi2O7 [25], is that the magnetic symmetry allows a large ME response.

A single crystal of CuB2O4 was grown by a flux method [26], and x-ray Laue photographs were used to determine the crystallographic orientation. The sample, with approximate dimensions of 3 × 4 × 5 mm³, was mounted on a coplanar waveguide (CPW), as shown in Fig. 1(b). The tetragonal axis is normal to the substrate of the CPW, and the a or b axis is parallel to the microwave propagation vector k. Microwave transmission spectroscopy was carried out using a vector network analyzer (Agilent E5071C), and the complex transmittances (i.e., S parameters) were obtained. The transmittances of microwaves propagating along +k and -k are denoted S21 (port 1 to port 2) and S12 (port 2 to port 1), respectively. Magnetic resonance of CuB2O4 affects both the amplitude and the phase of the microwave, which can be detected as a change in the S parameters. Using a spectrum at 1000 mT as a reference, at which the magnetic resonance is far above our measurement range (up to 20 GHz), we obtained the relative change in amplitude; the procedure is similar to previous reports [2,3]. So as to feed intense electromagnetic fields into the sample and to achieve a sufficient signal-to-noise ratio, we used a signal line of 0.2 mm, which is relatively narrow compared with the sample width. Thus, we assumed a microwave field consisting of two linearly polarized components, H || y, E || z and H || z, E || y [Fig. 1(c)]. External DC magnetic fields were applied in the plane of the CPW substrate.
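One common way to express the field-dependent transmission change against the 1000 mT reference spectrum is in decibels; the paper's exact normalization is not shown in this excerpt, so the following form is only an assumption:

```python
import numpy as np

def relative_amplitude_db(s21_field, s21_ref):
    """Relative change in transmitted amplitude against a reference
    spectrum taken at 1000 mT, where the magnetic resonance lies far
    above the measured band: 20*log10(|S21(H)| / |S21(H_ref)|) in dB.
    Assumed normalization; works on complex S-parameter arrays."""
    return 20.0 * np.log10(np.abs(s21_field) / np.abs(s21_ref))
```

The same expression applied to S12 gives the counter-propagating spectrum, and their difference is the NDD signal discussed next in the text.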
Because the MCh nonreciprocity is governed by k·H, it is expected to be an even function of the field angle θ. On the other hand, the magnetic dipolar-induced NDD [1], which is common to any magnet, including centrosymmetric Y3Fe5O12, is expected to show a different angle dependence. In the CPW, there is a mirror symmetry for the vertical plane along the center line of the waveguide (i.e., the x-z plane). This mirror symmetry ensures reciprocity at θ = 0° when the sample also has the mirror symmetry. When the magnetic field is tilted from θ = 0°, some nonreciprocity may be induced, which is reversed by the reversal of the tilting direction. Thus, the MCh-induced NDD and the magnetic dipolar-induced NDD are even and odd functions of θ, respectively. In order to distinguish them, we introduce symmetric and antisymmetric NDD spectra, S_sym(θ) = [S_NDD(θ) + S_NDD(−θ)]/2 and S_asym(θ) = [S_NDD(θ) − S_NDD(−θ)]/2, where the S_NDD(θ) spectrum is defined as S21 − S12 at θ. The MCh effect-driven NDD should appear in S_sym. Figures 4(c) and 4(d) summarize the angular dependence of S_sym and S_asym evaluated from the peak amplitudes of the spectra (see the Supplemental Material for details of the S_sym and S_asym spectra [27]). S_asym does not show a large difference between the k || a and k || b configurations; it increases linearly with θ in the low-θ region and saturates around 45°. On the other hand, S_sym shows characteristic behavior that appears proportional to cos2θ cosθ and has three nodes, at θ = 45°, 90°, and 135°. The signs of S_sym are opposite between the k || a and k || b configurations.
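The even/odd separation described above can be applied directly to spectra measured at paired angles ±θ; a small sketch (function and variable names ours) of extracting S_sym and S_asym:

```python
import numpy as np

def decompose_ndd(s_ndd):
    """Split NDD spectra S_NDD(theta) = S21 - S12, measured at paired
    angles +theta and -theta, into the even part (symmetric, MCh-driven)
    and the odd part (antisymmetric, dipolar-driven) in theta.
    s_ndd: dict mapping angle in degrees -> spectrum array."""
    s_sym, s_asym = {}, {}
    for theta, spec in s_ndd.items():
        if theta < 0 or -theta not in s_ndd:
            continue                        # process each +/- pair once
        plus = np.asarray(spec, dtype=float)
        minus = np.asarray(s_ndd[-theta], dtype=float)
        s_sym[theta] = 0.5 * (plus + minus)
        s_asym[theta] = 0.5 * (plus - minus)
    return s_sym, s_asym
```

By construction, any strictly even contribution survives only in `s_sym` and any strictly odd one only in `s_asym`, which is exactly the property used to isolate the MCh signal from the dipolar background.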
As reported previously [24,25,28,29], NDD and NDB can emerge when the linear ME effect is allowed. From the linear magnetic field dependence with a small intercept and the persistence in the paramagnetic phase, we assign the origin of the observed microwave response to electron paramagnetic resonance of CuB2O4.
In the weak ferromagnetic phase at 10 K, the Cu(A) spins have canted antiferromagnetic order, whereas the Cu(B) spins remain disordered or carry only a small magnetic moment [1]. Although further investigation may be needed for a final conclusion, the response can be attributed to electron paramagnetic resonance at the Cu(B) sites. It should be noted that the assignment of the origin of the magnetic resonance does not affect our discussion of the nonreciprocity based on the magnetic symmetry.
Stage-specific antigen-4 (SSEA-4) positive cells and carcinoembryonic antigen-cell adhesion molecule-1 (CEACAM-1) positive cells, indicative of pluripotent stem cells and totipotent stem cells, respectively, have been isolated and characterized from the skeletal muscle and blood of adult animals, including humans. The current study was undertaken to determine their location in the dermis and underlying connective tissues of the adult pig. Adult pigs were euthanized following the guidelines of Fort Valley State University’s IACUC. The skin (epidermis through hypodermis) was harvested, fixed, cryosectioned, and stained with the two antibodies: SSEA-4 and CEACAM-1. SSEA-4 positive cells were located preferentially in the reticular dermis of the skin and to some extent in the underlying hypodermis. In contrast, CEA-CAM-1 positive stem cells were preferentially located within the hypodermis of the pig skin within the loose fibrous connective tissues surrounding adipose tissue. CEA-CAM-1 positive cells were also located, to a lesser extent, in the dermis as well. These results demonstrate the presence of native populations of pluripotent stem cells and totipotent stem cells within the dermis, hypodermis, and adipose tissue of adult pig skin. Studies are ongoing to address the functional significance of these cells in normal injury and repair.
Introduction
There are three basic categories of cells within animals, i.e., functional cells, maintenance cells, and healing cells. The functional cells comprise the majority of the cell types and are composed of both stroma and parenchyma. They interact on a day-to-day basis with the animal's external and internal environments [1]. Maintenance cells replace functional cells as they wear out and die, as well as providing trophic factors for their function and survival. A few examples of maintenance cells are adipoblasts, fibroblasts, myoblasts, mesenchymal stem cells, medicinal secreting cells, and progenitor cells [1][2][3][4][5][6]. Healing cells are normally dormant and can be found hibernating within the stromal connective tissues throughout the body [7,8]. Their function is to replace functional cells and maintenance cells lost due to trauma and/or disease. Examples of healing cells are totipotent stem cells [1,6,8,9], pluripotent stem cells [1,6,[10][11][12], ectodermal stem cells [1,6], mesodermal stem cells [1,6,13], and endodermal stem cells [1,6]. Healing cells comprise approximately 10% of all the cells of the body. They are ubiquitous, as they are found throughout all organs and tissues of the body. More specifically, totipotent stem cells comprise approximately 0.1%, pluripotent stem cells approximately 0.9%, and the ectodermal stem cells, mesodermal stem cells, and endodermal stem cells taken together, approximately 9% of all cells of the body [1,6,14].
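The cited proportions can be sanity-checked with a few lines of arithmetic. The dictionary below simply restates the approximate percentages from [1,6,14]; it is illustrative only, not an independent data source.

```python
# Approximate healing-cell proportions restated from the text ([1,6,14]);
# values are percentages of all cells of the body.
healing_cell_fractions = {
    "totipotent stem cells": 0.1,
    "pluripotent stem cells": 0.9,
    "germ-layer stem cells": 9.0,  # ecto-, meso-, and endodermal combined
}

total = sum(healing_cell_fractions.values())
print(f"healing cells combined: ~{total:.0f}% of all body cells")
```

The three categories indeed sum to the roughly 10% figure quoted in the text.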
Native healing stem cells are located in the bone marrow [20], skeletal muscle [8], and blood stream [10,21] of various animal species. The current study was therefore designed to determine the location of healing cells in the skin of the adult pig. Discovery of these healing cells could provide an important initial step toward the ultimate goal of successful and safe cellular therapy for the treatment of a wide variety of conditions involving damage to the skin.
Animal Use
The use of animals in this study complied with the guidelines of the Institutional Animal Care and Use Committee of Fort Valley State University and with the criteria of the National Research Council for the humane care of laboratory animals as outlined in the "Guide for the Care and Use of Laboratory Animals" prepared by the Institute of Laboratory Animal Resources and published by the National Institutes of Health (National Academy Press, 1996).
Tissue Harvest
Twenty adult 120 lb. female Yorkshire pigs (n=20) were anesthetized with tiletamine and zolazepam, and then prepared for surgery with a Betadine wash. Sterile drapes were placed, and one-inch wide skin slices were made on either side of a midline laparotomy incision. The slices included the epidermis, dermis, and hypodermis with embedded adipose tissue. The skin was sliced into one-inch square pieces and placed in 500-ml wide-mouth tissue culture jars (Corning, NY) containing 400 ml of cold ELICA fixative. The ELICA fixative consisted of aqueous 0.4% v/v glutaraldehyde, 2% w/v paraformaldehyde, and 1% w/v glucose, pH 7.4, with an osmolality of 1.0 [8]. The porcine skin was allowed to remain in the fixative for 1 to 24 weeks at ambient temperature. After fixation, the skin was transferred and stored in Dulbecco's Phosphate Buffered Saline (DPBS, Invitrogen, GIBCO, Grand Island, NY) at pH 7.4 and ambient temperature. Pieces of skin and associated adipose tissue were removed, placed into Tissue Tek OCT Compound 4583 (Miles Laboratory, Ames Division, Elkhart, IN) and then frozen at -20 °C. The frozen pieces of skin were cryostat sectioned at seven microns in thickness with a Tissue Tek Cryostat II (GMI, Ramsey, MN), placed on positively charged slides (Mercedes Medical, Sarasota, FL) and refrigerated at -20 °C. Immunocytochemical staining was performed following established procedures for ELICA analysis [8,33].
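For readers scaling the fixative recipe to other batch sizes, the short helper below computes reagent amounts from the stated concentrations (0.4% v/v glutaraldehyde, 2% w/v paraformaldehyde, 1% w/v glucose). The function name and interface are illustrative, not part of the original protocol.

```python
# Illustrative helper for scaling the ELICA fixative recipe; the percentages
# come from the protocol text, but the function itself is hypothetical.
def elica_components(batch_ml: float) -> dict:
    """Reagent amounts for a given fixative batch volume."""
    return {
        "glutaraldehyde_ml": batch_ml * 0.4 / 100,   # 0.4% v/v
        "paraformaldehyde_g": batch_ml * 2.0 / 100,  # 2% w/v
        "glucose_g": batch_ml * 1.0 / 100,           # 1% w/v
    }

# The 400-ml batches used in this study:
print(elica_components(400.0))
```

For the 400-ml jars described above, this gives 1.6 ml glutaraldehyde, 8 g paraformaldehyde, and 4 g glucose.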
Immunocytochemistry
Seven-micron tissue sections were incubated with 95% ethanol to remove the OCT cryostat embedding medium and then washed under running water for five minutes. The tissue sections were incubated with 5.0% (w/v) sodium azide (Sigma, St. Louis, MO) in DPBS for 60 minutes. They were then washed in running water for five minutes, and incubated with 30% hydrogen peroxide (Sigma, St. Louis, MO) for 60 minutes to irreversibly inhibit endogenous peroxidases [34]. Tissue sections were rinsed with running water for five minutes and incubated for 60 minutes with blocking agent (Vectastain ABC Reagent Kit, Vector Laboratories Inc., Burlingame, CA) in DPBS [33]. The blocking agent was removed and the sections rinsed with running water for five minutes. They were then incubated with primary antibody for 60 minutes. The primary antibodies consisted of 0.005% (v/v) carcinoembryonic antigen cell adhesion molecule-1 (CEA-CAM-1) in DPBS for totipotent stem cells [9]; 1 μg per ml of stage-specific embryonic antigen-4 for pluripotent stem cells (SSEA-4, Developmental Studies Hybridoma Bank, Iowa City, IA) in DPBS [6,12]; and smooth muscle alpha-actin (IA4, Developmental Studies Hybridoma Bank) in DPBS [8,20]. The primary antibody was removed. The sections were rinsed with running water for five minutes, and incubated with secondary antibody for 60 minutes. The secondary antibody consisted of 0.005% (v/v) biotinylated affinity-purified, rat adsorbed anti-mouse immunoglobulin G (H + L) (BA-2001, Vector Laboratories) in DPBS [8]. The secondary antibody was removed. The sections were rinsed with running water for five minutes, and then incubated with avidin-HRP for 60 minutes. The avidin-HRP consisted of 10 ml of 0.1% (v/v) Tween-20 (ChemPure, Curtain Matheson Scientific, Houston, TX) containing 2 drops reagent-A and 2 drops reagent-B (Peroxidase Standard PK-4000 Vectastain ABC Reagent Kit, Vector Laboratories) in DPBS [8]. The avidin-HRP was removed.
The sections were rinsed with running water for five minutes, and incubated with AEC substrate (Sigma) for 60 minutes. The AEC substrate was prepared as directed by the manufacturer. The substrate solution was removed. The sections were rinsed with running water for 10 minutes and then coverslipped with Aqua-mount (Vector Laboratories) [8]. Positive and negative controls were included to assure the validity of the immunocytochemical staining [8]. The positive controls consisted of adult-derived totipotent stem cells (positive for CEA-CAM-1) [8,9], pluripotent stem cells (positive for SSEA-4) [6,12], and smooth muscle surrounding blood vessels within the tissue (positive for IA4) [8,20]. The negative controls consisted of the staining protocol with DPBS alone (no antibodies or substrate), without primary antibodies (CEA-CAM-1, SSEA-4, or IA4), without secondary antibody (biotinylated anti-mouse IgG), without avidin-HRP, and without substrate (AEC) [8].
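As a rough codification of the incubation and wash sequence above, the sketch below sums the stated step durations (the initial ethanol step has no stated time and is omitted). It is a back-of-the-envelope bench-time estimate, not a validated schedule.

```python
# Step durations (minutes) as stated in the staining protocol; the opening
# 95% ethanol incubation has no stated duration and is left out.
protocol = [
    ("wash", 5), ("sodium azide block", 60),
    ("wash", 5), ("hydrogen peroxide", 60),
    ("wash", 5), ("blocking agent", 60),
    ("wash", 5), ("primary antibody", 60),
    ("wash", 5), ("secondary antibody", 60),
    ("wash", 5), ("avidin-HRP", 60),
    ("wash", 5), ("AEC substrate", 60),
    ("final wash", 10),
]

total_min = sum(minutes for _, minutes in protocol)
print(f"total stated protocol time: {total_min} min (~{total_min / 60:.1f} h)")
```

The seven 60-minute incubations and eight washes sum to 465 minutes, i.e., a full working day per staining run.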
Visual Analysis
Stained sections were visualized using a Nikon TMS phase contrast microscope with bright field microscopy at 40x, 100x, and 200x. Photographs were taken with a Nikon CoolPix 995 digital camera.
Results
Cells that exhibited positive staining for CEA-CAM-1 were located in the loose fibrous connective tissue surrounding the adipose tissue in the hypodermis of adult porcine skin ( Figure 1A), as well as in the fibrous connective tissue surrounding the blood vessels within the reticular layer of the dermis ( Figure 1B). Cells that exhibited positive staining for SSEA-4 were located preferentially within the fibrous connective tissue of the reticular layer of the dermis of adult porcine skin ( Figure 1C). Positive staining for smooth muscle alpha-actin was used as a positive procedural control. Such staining was apparent within the tunica media of a blood vessel located in the reticular layer of the dermis ( Figure 1D). Negative procedural controls demonstrated absence of staining within the dermis or hypodermis of porcine skin (data not shown). Young et al. previously reported the presence of totipotent stem cells (TSCs) and pluripotent stem cells (PSCs) in a variety of species of post-natal mammals [9,14,35]. Both stem cells have shown the capacity for extended self-renewal and the potential to form any somatic cell of the body. Stout et al. [28] demonstrated that these stem cells circulate in the peripheral blood as well as reside within the skeletal muscle of adult pigs. These cells were liberated from the skeletal muscle into the peripheral blood after only 90 minutes of trauma [28]. These findings led to the current study which identified and located CEA-CAM-1-positive totipotent stem cells and SSEA-4-positive pluripotent stem cells in adult porcine skin. Further studies are necessary to further characterize and utilize these cells in both veterinary and human therapeutic applications.
Discussion
Clonal populations of totipotent stem cells and pluripotent stem cells have been derived by repetitive serial dilution clonogenic analysis in the adult rat [9,12]. These cells can differentiate to form cells belonging to all three germ layer lineages, i.e., ectoderm, mesoderm, and endoderm [6]. For example, rat brains were lesioned with 6-hydroxydopamine as a model of Parkinson disease. After stereotactic injection of Lac-Z-transfected undifferentiated pluripotent stem cells, the genomically-labeled stem cells formed dopaminergic neurons, cortical pyramidal neurons, interneurons, glial cells, and vasculature [36]. Another example involves lesioning of a coronary artery as a model of myocardial infarction. Cultured Lac-Z-transfected undifferentiated pluripotent stem cells were injected either directly into the area of the lesion or administered systemically by intravenous injection. These genomically-labeled cells incorporated into the myocardium, demonstrating a possible role in the restoration of the histoarchitecture of the damaged myocardium [12]. These same cells have also been shown to generate three-dimensional pancreatic islet-like structures that responded to a glucose challenge by secretion of species-specific insulin [37]. Taken together, these results suggest that the regeneration and repair of one organ or tissue may be accomplished by totipotent stem cells and pluripotent stem cells residing in another organ or tissue.
Much of the research on therapeutic approaches to the treatment of inherited skin disorders, such as xeroderma pigmentosum, involves the use of gene therapy employing a viral vector [38]. Young et al. proposed that pluripotent stem cells could provide a means for delivery of gene therapy without loss of the development potential of the stem cells themselves [14]. Thus, based on the Lac-Z genomically-labeling studies [12,36], a transfected pluripotent stem cell could serve as a vector for gene therapy while simultaneously providing a source of cells with normal tissue function.
Skin ages as a consequence of decreased cutaneous elasticity and resilience [39]. The development of wrinkled skin is associated with increased susceptibility of elastic fibers in the dermal extracellular matrix to proteolytic degradation. These changes are a consequence of the inability of adult dermal fibroblasts to synthesize elastin to replace the degraded elastic fibers. Healing cells (in the form of totipotent stem cells, pluripotent stem cells, and/or mesodermal stem cells), could provide a source of functional pre-fibroblasts or cells that provide fibroblasts with the appropriate signals required for their survival. In addition to their possible uses for cosmetic purposes in the treatment of aging in skin, dermal stem cells could provide a possible treatment option for inherited diseases that impair the deposition of elastic fibers.
Ectodermal stem cells are known to exist in the bulge of the hair follicle. Such stem cells are capable of regenerating the hair follicle and epidermis in response to burns or other types of wounds [40]. Stem cells in the follicular bulge are damaged in certain types of alopecia, particularly those involving inflammation and permanent loss of follicles [41]. Healing cells, such as totipotent stem cells and pluripotent stem cells, isolated from various tissue locations such as the dermis [this study, 7], adipose tissues [this study, 22], skeletal muscle [7,8,25,28], blood [10,21,28,29], or bone marrow [11,17,19,20], may provide a source of functional stem cells for repopulating the follicular bulge to facilitate therapeutic regeneration of hair.
Practical Approaches to Pest Control: The Use of Natural Compounds
Food production is challenged by different factors: climate changes, market competitiveness, food safety, public demands, environmental challenges, new and invasive pests, etc. Intensive food production must be protected against pests, which is nowadays impossible with traditional techniques alone. The use of eco-friendly biopesticides based on essential oils (EOs), plant extracts (PE), and inert dusts appears to be a complementary or alternative methodology to conventional chemically synthesized insecticides. The use of such biopesticides reduces the adverse effects of pesticides on human health and the environment. Biopesticides can exhibit toxic, repellent, and antifeeding effects. Development of bio-insecticides tackles the problem of food safety and residues in fresh food. An innovation within this approach is the combination of several types of active ingredients with complementary effects. Essential oils are well-known compounds with insecticidal or repellent activities. New approaches, tools, and products for ecological pest management may substantially decrease pesticide use, especially in fruit and vegetable production. A win-win strategy is to find appropriate nature-based compounds with impact on pests, resorting to pesticide use only when unavoidable. Toxic or repellent activity could be used for pest control under field conditions, while the attractiveness of some compounds could serve for mass trapping before pests cause significant economic damage.
Introduction
The current agricultural production, especially food production (the whole production-market chain) in the fruit and vegetable sector, is challenged by climate changes, worldwide market competitiveness, food safety, environmental and public demands, new and invasive pests and diseases, etc. New invasive and destructive pests that have recently appeared, especially in fruit and vegetable production, limit the use of chemical control agents because of their high persistence in the fresh food chain. For humans, fruits and vegetables are a rich source of vitamins, minerals, fibers, acids, sugars and secondary metabolites in biologically functional forms. Generally, higher fruit and vegetable consumption is important for improving human health. Challenged additionally by newer standards and climate changes, intensive food production is unthinkable without protection from pests and diseases, which is nowadays impossible using only commonly applied plant protection techniques. Different approaches such as better hygiene, standards in production (e.g. GlobalG.A.P.), agro- and pomotechnical measures, prophylactic measures, beneficial insects, mechanical intervention, biocontrol products and less sensitive varieties have been developed. However, a wide use of pesticides is still necessary, yet none of the chemical control techniques developed against economically important pests over their long history has provided long-term protection, because pest species develop resistance [1,2]. Pesticide use may also result in residues on food and food products above the allowed maximum residue level (MRL) legally determined by regulations (e.g. EU regulation, WTO, CEFTA, etc.), even when produced under good agricultural practices (GAP). Multiple pesticide residues were found in 48% of the analyzed apples, 55% of the peaches and 56% of the cherries in 2015 [3]. Additionally, pesticides have an impact on the environment.
In several European countries, groundwater pesticide concentrations exceed the European quality standards. Increasing concern among customers, consumers, and society at large about the effects of pesticide utilization on human health and the environment has led to continuous changes in techniques for pest and plant disease management. Even though significant improvements have been made, there is a need for alternative methodologies that ensure lower utilization of pesticides, have less impact on the environment, and guarantee that fruits are practically free from pesticide residues.
The use of eco-friendly biopesticides based on essential oils (EOs), plant extracts (PE) and inert dusts appears to be a complementary or alternative methodology to chemically synthesized insecticides. Within plant protection practices, modern environmental requirements impose the need for expanding biological control measures. Investigations of the biological activity of plant derivatives work toward this goal, and some researchers have demonstrated certain promising natural substances that can be used for this purpose [4][5][6][7]. Natural semiochemicals with low toxic potential, which would not cause ecosystem disturbance through high mortality of the target insect population, could become the predominant method of pest control in the future [8], relying on naturally acquired plant defence mechanisms. Antifeedant activities of essential oils or extracts of different plant species seem to interfere with insect chemoreceptors. Plants produce alkaloids, steroids, flavonoids, terpenoids and saponins that possess high antifeedant activities against different insects; therefore, these compounds could be used in formulations and products suitable for integrated insect management programmes. Generally, EOs and their components have been considered safer than other plant-derived chemicals like rotenone and pyrethrum, as has the use of several inert dusts for pest and plant disease control [9][10][11]. Novel strategies are important and necessary, bearing in mind the challenges arising from climate change (increased ranges of pest species, number of generations, etc.), public demands and standards in production practice.
Defensive mechanisms of plants under insect infestations
In all natural ecosystems, plants are exposed to stressful situations caused by biotic and abiotic factors that are largely responsible for significantly reducing crop productivity. For these reasons, plants produce secondary metabolites that protect them under adverse conditions [12]. When it comes to biotic stress, there are three basic strategies that plants use to defend against their enemies: (1) direct defence, (2) indirect defence and (3) tolerance [13]. These strategies are similar to those described by Berryman [14], who stated that plants either may tolerate attack or will use defence mechanisms. Which plant defence strategy is used depends on the insect species causing the damage [15]. During the co-evolution of plants and insects, plants have developed certain responses to attacks of herbivores: changes in the chemical composition of their leaves, as well as in their different morphological and physiological properties [16]. Considering abiotic stresses, for example, the lack of water can significantly affect the choice of plant defence mechanism; water deficit causes adverse physiological and morphological changes in plants [17]. The defence mechanisms represented in plants are directly related to the origin and intensity of stress and can be classified as indirect or direct. As stress increases, the number of possible defence scenarios decreases.
Indirect defence mechanisms include all plant features that increase the attraction of the pest's natural enemies [13] or prevent pest oviposition [18,19]. In contrast, direct defence mechanisms are morphological (e.g. thorns, hairs), chemical (primary and secondary metabolites), or a combination of the two. The leaves of some plant species bear hairs that directly adversely affect herbivores and, in addition, glands that secrete secondary metabolites [20]; these often have a toxic effect (e.g. alkaloids, terpenoids, phenols) and may also inhibit digestive enzymes [21], forcing herbivores to detoxify them and causing their poorer growth and development. If the level of biotic stress is of lower intensity, tolerance predominates. Tolerance occurs when a plant may lose tissue to the herbivore while continuing its further development [22].
Both direct and indirect defence mechanisms can be further divided into passive (constitutive) and dynamic (induced) defence, described in the following paragraphs.
Constitutive defence
Constitutive defence is a passive type of defence of a plant against herbivores and other pathogens and is recognizable by the use, when stress arises, of secondary metabolites accumulated under favorable conditions [16,17]. It is characteristic of perennial plants and is effective in fighting generalists such as the gypsy moth, Lymantria dispar L. (Lepidoptera: Erebidae). This type of defence is carbon-based and is present in plants growing under conditions that cause a chronic excess of carbon, which provokes accumulation of carbon-based allelochemicals: lignin, tannins and other phenolic compounds, terpenes and resins. These plant compounds, which have negative effects on the growth, development or survival of another organism, are considered toxins. Plants that endure stressful situations by constitutive chemical defence must at the same time be able to sustainably synthesize and accumulate toxic substances without negative consequences for their own physiology [23].
However, insects and other plant-borne pathogens have developed various mechanisms to respond to plant toxins [23] and often use them to identify plants as hosts for feeding and oviposition [24]. Hilker and Meiners [25] consider that the presence of a particular insect species that has developed biochemical mechanisms of adaptation to the toxic effects of plant secondary metabolites enhances plant defence in the event of a subsequent herbivore attack. Nevertheless, constitutive secondary metabolites with antifeeding action protect plants from most unadapted insects [26] and at high concentrations adversely affect even specialized insects [27].
Induced defence
Induced defence in plants is based on their secondary metabolites (terpenes, phenols) and physical structures (cell lignification), as well as on a reduction in the production of essential substances that attract herbivores, in response to their attack [14]. The type of plant response depends on the balance between primary and secondary metabolites [28]. If current reserves are reduced by stressful conditions (drought, nutrient deficiency), the presence of herbivore populations is more pronounced. Increased plant resistance reduces the presence and the harmful effects of insects. The minimal length of latency for a plant depends on the rate of decline of plant resistance (e.g. the time needed for the plant to recover from defoliation) [29]. The response of plants to the harmful effects of insects is measurable over timescales ranging from a few minutes to evolutionary time [28].
Additional research has been focused on increased concentrations of secondary metabolites, induced by the attack of insects or other pathogenic organisms. Terpenoids are considered to be the most abundant and diverse metabolic class of plant bioactive products (more than 40,000 structures). They have antifeedant, repellent and toxic effects and can act as regulators of insect development [30]. Bioactive natural products such as alkaloids possess well-known metabolic effects on mammals (e.g. caffeine, nicotine, morphine, strychnine and cocaine) and have probably evolved as a defence against herbivore insects [31]. It is known that the feeding of autumnal moth, Epirrita autumnata (Borkhausen) (Lepidoptera:Geometridae), with birch leaves increases the content of phenolic compounds [32]. Gypsy moth (L. dispar) feeding increases the content of tannins in oak leaves [33], while after the attack of bark beetles, terpenes and phenolics levels rise in the phloem of attacked trees [34]. Defensive proteins that act on insect digestive enzymes have also been identified in plants. For example, protease inhibitors [21] play a special protective role against insects and microorganisms, in addition to their primary role in the regulation and control of endogenous protease activity, and serve as reserve proteins [35]. The synthesis of protease inhibitors is a part of the induced defence of plants from insect attack. Thanks to the advances in genetic engineering, there is possibility to grow plants with increased levels of protease inhibitors with herbivore defence mechanisms.
The role of secondary metabolites in insect-plant interactions
Secondary metabolites are organic compounds including terpenes, phenols, alkaloids, proteins and enzymes. They are not directly involved in the development or reproduction of plants (unlike primary metabolites), but they are often involved in plant defence mechanisms. Usually found in only one plant species or genus, with limited distribution, their production imposes a cost on plant growth and reproduction [36]. These compounds were once considered waste products of metabolism without essential function in plant survival [37].
Plants produce different chemical compounds that can be toxic or indigestible for animals [38]. Plant chemical defence is classified into two categories: (1) quantitative defence, with massive production of indigestible substances; and (2) qualitative defence, with limited production of toxic substances [39].
According to the theory of apparency, plants and their organs are classified as apparent or unapparent [39]. The theory of the balance between growth and differentiation (the plant's "dilemma" between cell growth or division and differentiation), which creates specialized organs and compounds for defence, has also found support [38].
The presence and availability of nutrients in soil significantly contribute to the levels of constitutive and induced allelochemicals in plants [40,41]. There are numerous examples of such effects [42]: nitrogen fertilization, for instance, affects the increase in induced poplar resistance after continuous feeding by gypsy moth caterpillars for only 72 h. The composition and concentration of secondary metabolites show marked interspecies variation, which is not the case with primary metabolites. Significant variation has also been observed between genotypes within the same species, between trees of different ages, between branches of one tree, and between leaves of different ages on one branch.
Terpenes
Terpenes are the largest class of secondary metabolites (over 22,000 compounds described); they occur in all plants and are classified by the number of isoprene units: monoterpenoids (two units), sesquiterpenoids (three units), diterpenoids (four units) and triterpenoids (six units). Isoprene (C5H8) is the simplest terpenoid and protects cell membranes from damage under adverse conditions (high temperatures). The primary components of essential oils are monoterpenoids and sesquiterpenoids. They are volatile, and their aromas are characteristic of certain plants. They are toxic to insects and pathogens. Monoterpenoids can be used as insecticides; for example, pyrethrins (compounds from Chrysanthemum) act as neurotoxins to insects. Synthetic analogues of pyrethrins are the pyrethroids, a chemical group of pesticides comprising a large number of commercial insecticides. Alpha- and beta-pinenes, found in pine resin, are known as potent repellents. Monoterpenoids can also be used as spices and perfumes while being relatively harmless to humans. Diterpenoids may have antifungal and antibacterial properties, such as gossypol, a component of cotton. Triterpenoids are similar in their molecular structure to plant and animal sterols and steroid hormones, and some mimic insect moulting hormones. For example, azadirachtin is a limonoid isolated from the neem tree (Azadirachta indica) that has antifeedant activity and causes sterility. Citronella essential oil, isolated from Cymbopogon species, is popular in the United States as a mosquito repellent for its low toxicity [43]. In addition to defence against harmful insects and microorganisms, terpenes also play a role as signals attracting pollinators [44].
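Since each isoprene unit contributes five carbons, the class names above map directly onto carbon skeleton sizes. The snippet below just makes that arithmetic explicit.

```python
# Carbon skeleton size per terpenoid class: 5 carbons per isoprene unit.
ISOPRENE_CARBONS = 5

terpene_units = {
    "monoterpenoid": 2,     # e.g. pyrethrins, alpha-/beta-pinene
    "sesquiterpenoid": 3,
    "diterpenoid": 4,
    "triterpenoid": 6,
}

for name, units in terpene_units.items():
    print(f"{name}: {units} isoprene units -> C{units * ISOPRENE_CARBONS}")
```

This yields the familiar C10/C15/C20/C30 skeletons for mono-, sesqui-, di- and triterpenoids, respectively.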
Phenols
Phenols are also a large class of plant secondary metabolites and comprise a wide range of compounds (flavonoids, anthocyanins, phytoalexins, tannins, lignins, furanocoumarins). They have different effects on harmful organisms. Tannins have a toxic effect on insects by binding to proteins and salivary digestive enzymes, including trypsin, leading to protein inactivation. After ingesting a large amount of tannins, herbivorous insects fail to gain weight and finally die. Lignins are embedded in the cell walls of plants and provide an excellent physical barrier against pathogens. Furanocoumarins are produced by a wide variety of plants in response to pathogens and are activated by UV light; they are toxic to vertebrates and invertebrates because they integrate into DNA and act at the cellular level [43].
Alkaloids
Alkaloids are a large class of bitter-tasting nitrogen compounds found in many vascular plants (caffeine, cocaine, morphine, nicotine). They are derived from the amino acids aspartate, lysine, tyrosine and tryptophan. They have powerful effects on the physiological processes of animals. Caffeine is toxic to insects and fungi and also inhibits seed germination in the vicinity of other growing plants (allelopathy). Nicotine is produced in the root of the tobacco plant and transported to the leaves, where it is stored in vacuoles and released in the presence of herbivores, with toxic effects. Plants that produce cyanogenic glycosides also produce enzymes that convert these compounds into hydrogen cyanide; the glycosides are stored in separate cells, and toxic hydrogen cyanide is released only when those tissues are damaged [43].
Proteins
In contrast to simple chemicals such as the terpenoids, alkaloids and phenols, proteins require a large expenditure of energy by plants and are formed in significant amounts after the attack of pathogens. Once activated, the defence proteins and enzymes effectively inhibit fungi, bacteria, nematodes and herbivorous insects. Defence against herbivores is achieved by forming an enzyme complex that leads to enzyme inhibition. These proteins include defensins, amylase inhibitors, lectins and proteinase inhibitors. Defensins have broad antimicrobial activity. First isolated from barley endosperm (Hordeum vulgare L., Poales: Poaceae) and wheat (Triticum aestivum L., Poales: Poaceae), they are widely distributed and found in most plants. They are most prevalent in seeds but can be found in almost all plant tissues. In addition to inhibiting the growth and development of many fungi and bacteria, they inhibit the digestive proteins of herbivores and impair the cellular balance of ions. Many defence proteins are inhibitors of digestive enzymes and block the normal process of digestion and absorption of nutrients in vertebrate and invertebrate herbivores. Alpha-amylase inhibitors interfere with starch digestion; lectins have a wide range of functions, including impairing digestion in insects and disintegrating blood cells in vertebrates; and ricin, a toxin produced in castor (Ricinus communis L., Malpighiales: Euphorbiaceae), is highly potent and inhibits protein synthesis. In response to the attack of herbivores, plants produce proteinase inhibitors that block digestive enzymes, including trypsin and chymotrypsin; these are widespread in nature [43].
Enzymes
A special group of proteins, enzymes, are produced in plants in response to the presence of pathogenic organisms and often accumulate in extracellular spaces, where they degrade the cell walls of pathogenic fungi. Chitinases are enzymes that catalyze the degradation of chitin, a cellulose-like polymer present in the cell walls of fungi. Glucanases are enzymes that cleave glycosidic bonds in glucans, a class of cellulose-like polymers present in the cell walls of many oomycetes, while lysozyme is a hydrolytic enzyme capable of degrading bacterial cell walls [43]. Chitinase and glucanase enzyme activity lyses pathogen cells [45].
The effects of secondary plant metabolites on harmful insects: state of the art
There is strong public pressure for the production of healthy food, i.e. food without pesticide residues. For this reason, extensive testing is being carried out on the use of secondary metabolites as an alternative to pesticides. During co-evolution with insects, plants have created many strategies for effective protection. The most important defence mechanism in plants is the synthesis of biologically active compounds, the so-called secondary metabolites, which can act directly as insecticides or indirectly affect the behaviour of insects; these are called allelochemicals. Allelochemicals are divided into four subgroups (allomones, kairomones, synomones and apneumones) and can be used in plant protection.
Metabolites from the allomone subgroup currently have the highest potential [46]. It is known that plant secondary metabolites (essential oils, alkaloids, saponins, glucosides, tannins, flavonoids, organic acids) are involved in defence against harmful insects [4,6,7,47,48], leading to attempts at field application (spraying) of plant extracts. In recent decades, there has been increasing evidence of the diverse ecological, physiological and biochemical roles of these compounds [37,49,50]. The antifeeding properties of plant sprays against harmful insects are thought to have no negative effects on predators or pollinators [51], thus providing an ideal opportunity for pest control [52]. Numerous secondary metabolites, plant extracts and essential oils have insecticidal properties [53,54]. These substances have oral, contact or inhalation toxic effects on insects, together with antifeeding and repellent effects, which cause a decrease in reproductive potential and changes in normal behaviour [55]. Plants produce a wide range of chemicals in various parts above and below ground that are used to defend against stress caused by biotic and abiotic factors, but also for communication with other plants and organisms. On the other hand, insects have developed strategies to avoid these chemicals [56] or effective detoxification systems specific to individual insect taxa [57], which can differ greatly between species feeding on the same plant [27,58].
An insect's orienting abilities include receiving information about the spatial relationships of an organism, processing it and transmitting it to effectors that can change those relationships. This can be redefined as the relationship between the input and output state of the system (the insect/plant relation); the chemosensory system therefore allows insects to maintain a constant course, find a host or turn to a sexual partner [59]. Insects often use more than one substance to detect differences between host plants, and the use of secondary metabolites for these purposes is a consequence of evolution.
Dethier et al. [60] described the reactions of insects to chemical compounds:
1. Attractant: A chemical that causes the insect to orientate towards the source.
2. Repellent: A chemical that causes the insect to move away from the source.
3. Arrestant: A chemical that causes confusion and slows the movement of an insect towards the source.
4. Feeding or ovipositional stimulant: A chemical that stimulates feeding and egg laying (oviposition).
5. Deterrent: A chemical that inhibits feeding and prevents egg laying (oviposition) in an area where the insect would otherwise feed and lay eggs.
This terminology is generally accepted for describing the reactions of insects to chemical compounds, whether present in the plant or applied for protection against herbivores.
An essential biological characteristic of herbivores is their feeding range: whether they feed on a single plant species (monophagous), on several plant species within one family (oligophagous) or on various plant species from different families (polyphagous). In recent decades, extensive research has been done on the impact of secondary plant metabolites on harmful insects, regardless of the feeding group to which they belong.
Effects on stored-product pests have been widely investigated. Bioactive substances from Myristica fragrans Houtt. (Magnoliales:Myristicaceae) oil have been found to have repellent and antifeeding (contact and fumigant) activity and to significantly reduce offspring in Sitophilus zeamais (Motschulsky) (Coleoptera:Curculionidae) and Tribolium castaneum (Herbst) (Coleoptera:Tenebrionidae) [61]. Elettaria cardamomum L. (Zingiberales:Zingiberaceae) seed oil possesses contact and fumigant toxicity and antifeeding activity against S. zeamais and T. castaneum [62]; this essential oil reduces egg laying and egg hatching in T. castaneum. Extracts obtained from seeds of the Basella alba plant and leaves of Operculina turpethum and Calotropis gigantea act as inhibitors of S. zeamais development [63]. Essential oils obtained from the leaves of Eucalyptus dunnii, E. saligna, E. benthamii, E. globulus and E. viminalis (Myrtaceae) showed a pronounced insecticidal and repellent effect on S. zeamais [64,65]. A somewhat weaker, but still very toxic and repellent, effect on S. zeamais and T. castaneum was shown by the essential oil obtained from the leaves of Cupressus sempervirens, as well as by cymene, the dominant component of the essential oils of E. saligna and C. sempervirens [65]. Both cinnamon (Cinnamomum zeylanicum) extracts and essential oils of the plants Etlingera elatior, E. pyramidosphaera and Zingiber officinale show strong repellent activity towards S. zeamais, while moderate repellent activity is shown by extracts of Curcuma longa and Piper nigrum [66]. Essential oils of Ocimum basilicum L. and Salvia officinalis L. caused significant mortality and had repellent and anti-reproductive effects [67].
Examination of five ethanol extracts of medicinal aromatic plants, tested for the protection of beans from the weevil Acanthoscelides obtectus Say with respect to repellent and toxic action as well as reduction of F1 offspring, showed significant insecticidal activity of concentrated extracts of Urtica dioica L. and Taraxacum officinale L., while Achillea millefolium L. extract had a repellent effect and caused a decrease in F1 offspring [68]. Similar tests on A. obtectus with the essential oils of Thymus vulgaris L., Rosmarinus officinalis L. and Ocimum basilicum L. and their dominant components (thymol, alpha-pinene, 1,8-cineol and linalool) showed that T. vulgaris EO and thymol have promising efficacy and can be used as alternatives to synthetic pesticides [69].
The Colorado potato beetle (CPB) (Leptinotarsa decemlineata Say, Coleoptera:Chrysomelidae) is an oligophagous pest. The major components in the EOs of potato leaves responsible for the attractive action of potato sprouts have been identified and are referred to as "green leaf volatiles." They are essentially a series of saturated and unsaturated aldehydes and alcohols formed by the oxidative degradation of plant lipids. The relative proportions of these end products (mainly alcohols and aldehydes) vary among plant species within the same genus, as well as seasonally within one species, owing to the aging and injury of the plants, all of which affect the degree of attraction of the CPB. The volatile components reported to attract the CPB are trans-2-hexen-1-ol, hexanol-1, cis-3-hexen-1-ol, trans-2-hexenal and linalool, in the ratio (expressed as percentages) 100:17:7:7:4 [70]. Host attractiveness to insects in relation to secondary metabolites, based on the molecular interaction of the CPB with plant species of the family Solanaceae, was investigated by Lawrence et al. [71].
A neem extract (i.e. azadirachtin) prepared against third-stage larvae of L. decemlineata has a significant antifeedant effect and low toxicity and can be used to control oligophagous herbivores [4]. In biological studies of the residual toxicity and antifeedant action of ethanolic preparations of sage, Salvia officinalis L. (Lamiaceae) (the essential oil, five fractions of the same oil, F1-F5, and camphor), low toxicity was observed on second-stage larvae and CPB adults, with no effect on embryonic development; the antifeedant activity on the larvae was very significant in the first 96 h, after which the activity declined [5]. The possibility of disturbing the attractiveness of the potato leaf to female CPB in an olfactometer was investigated by applying an ethanolic solution of sage oil and five fractions (F1-F5) of this oil. Recognition of the potato leaf was impeded most by the whole sage essential oil and least by fraction one (F1) [72]. Extracts of five plant species collected in Turkey (Arctium lappa L., Bifora radians M.Bieb., Humulus lupulus L., Xanthium strumarium L. and Verbascum songaricum (Schrenk)) were used to investigate the antifeedant effect on L. decemlineata larvae. In the first 15 min, the interaction between the larvae and the potato leaf mass was significantly affected, and during the first 24 h, feeding was reduced. Gökçe et al. [73,74] observed that a toxic effect on the CPB was obtained with extracts of the dried rhizome of Veratrum album (CHCl3, acetone and NH4OH/benzene), and that the compounds oxyresveratrol, b-sitosterol-3-O-b-D-glucopyranoside and jervine have the potential to be used as natural insecticides.
The biological effects of 24 terpenes commonly found in aromatic plants of the Mediterranean region have been investigated to determine their antifeedant effect on the CPB as well as their allelopathic impact. The terpene (−)-α-bisabolol possesses high antifeeding and low phytotoxic activity [44].
The gypsy moth is a polyphagous insect and belongs to the group of the most harmful moths. The caterpillar feeds on the leaves of almost all types of hardwoods and conifers and on the green mass of many agricultural, fruit and vegetable crops. Protection against the damaging effect of the gypsy moth must build on the knowledge that secondary metabolites are involved in plant defence against insects [4,6,8,47]. Several EOs and their components have antifeeding activity against the caterpillars: Kostic et al. [6] found that Ocimum basilicum EO and its dominant component linalool cause antifeedant activity against second-instar larvae, and Popovic et al. [8] found that fractions of O. basilicum EO also act as antifeedants on second-instar (L2) gypsy moth caterpillars, as do the EOs of Athamanta haynaldii and Myristica fragrans [7]. Also, neem (0.09% azadirachtin, Safer) shows good antifeedant activity against L2 and low digestive toxicity [4], which was confirmed in other investigations [6][7][8].
Herbal extracts and EOs have numerous positive properties compared with conventional insecticides, such as the absence of adverse environmental effects and of disturbance of the biocenosis, the absence of nonspecific effects on predators and parasitoids, minimal toxicity to mammals, ease of detection and, finally, the inability of pests to develop resistance. Some disadvantages must be overcome in order to make their application as efficient and easy as possible. The problems encountered in working with EOs are their high volatility, inconsistency, inadequate formulation, limited shelf life and action on a very limited number of pests [76,77].
When insects develop resistance to certain plant secondary metabolites, they must also develop resistance to the molecules associated with these metabolites, which generate synergistic effects. For example, in oak leaves, tannins bind proteins to form complexes that are difficult to digest. Feeny [39] concludes that tannins, as part of a wide range of defence mechanisms, have repellent, antibiotic and growth-inhibiting properties via their effect on protein availability. For the gypsy moth, however, tannic acid is an attractant, and the alkaline pH of its digestive tract prevents the formation of tannin-protein complexes.
Inorganic compounds
In recent years, one alternative method for crop protection and for the protection of stored agricultural products in warehouses has been the use of various inorganic dusts.
So far, diatomaceous earth (DE) preparations have been the most widely registered and applied in agricultural practice. Diatomaceous earth was created by the fossilization of tiny aquatic microorganisms, microscopic algae called diatoms. The main constituent of their skeletons is silica, which in contact with water and oxygen forms silicon dioxide. DE-based formulations consist mainly of an amorphous form of silicon dioxide and a smaller proportion of crystalline silicon dioxide. The first DE-based formulation was registered in 1960 in the United States for the control of insects and mites. To date, over 150 preparations for various uses have been registered. They are used against bedbugs, cockroaches, crickets, fleas, ticks, spiders and many other pests. They have also found application in the protection of stored products, not only in conventional agricultural production but also in IPM and organic production [78].
In addition to DE, many other inorganic powders such as silicophosphate, rock phosphate, sand, kaolinite, clay, zinc oxide, titanium dioxide, vermiculite dust, zeolite, alumina, etc. have also been studied [9,10,[79][80][81][82]. Beyond natural dusts, the possibility of producing and applying nano-dusts has been increasingly studied in recent years. The application of modern nano-methods yields nanopowders of improved properties (Figure 1) and efficiency that are also more environmentally friendly (less toxic to mammals and plants, more durable and less harmful to the environment than conventional dusts) [9,79,80,83,84].
The mechanism of action of native dusts and nano-dusts is not fully understood. Some authors believe that the particles of these preparations bind to the exoskeleton, adsorb lipids from the cuticle and cause dehydration of the insect [80]. Other authors believe that dust particles can physically damage the cuticle and lead to dehydration, that when ingested they can damage the intestinal tract of insects, and that they can block the tracheae and thus the insect's respiration [85,86]. The proposed modes of action thus include abrasion of the cuticle, absorption of cuticular waxes from the epicuticle surface, damage to the digestive tract, blocking of spiracles and tracheae, surface enlargement combined with dehydration, and repellence caused by the physical presence of the dust. It is assumed that such chemically inert compounds, attached to the exoskeleton, are able to adsorb cuticular lipids, thus causing rapid dehydration of insects.
Mineral elements (macronutrients and trace elements) play an important nutritional role in plants and are necessary for the normal course of many cellular processes such as primary and secondary metabolism, defence, gene regulation, hormone perception, energy metabolism, reproduction and signal transduction [87]. Many of these functions can increase plant resistance and protection against harmful organisms. According to Reynolds et al. [88], silicon (Si), which has been found to play a significant role in overcoming various biotic and abiotic stress factors in plants, may have both indirect and direct effects on enhancing the defence capabilities of plants against harmful insects, as part of the mechanisms of physical and induced chemical defence. The physical defence mechanisms involving Si are mainly related to the deposition of Si, chiefly in the form of opaline phytoliths, in the cell walls, especially in the epidermal cells of plants, thereby increasing their firmness and abrasiveness; for insects, this can hamper feeding and damage the mouthparts. Such plant food also has reduced digestibility, which negatively affects insect growth and feeding and is reflected in reduced growth, longevity and fertility. The presence of Si in the plant may also initiate or accelerate a number of different chemical defence mechanisms that protect the plant from harmful insects.
Si can cause a significant increase in defence enzymes such as peroxidase, phenylalanine ammonia-lyase (PAL) and polyphenol oxidase, which are involved in the processes of lignification and suberin synthesis (peroxidase), increased production of phenolic compounds (PAL) and oxidation of phenolic compounds (polyphenol oxidase). These processes increase the hardness of plant tissue and produce compounds with deterrent and toxic properties, while reducing the nutritional quality of food and the digestibility of proteins. Silicon also exerts a positive effect on the biosynthesis of volatile compounds, such as those derived from jasmonic acid and salicylic acid, which herbivore-attacked plants emit to attract the natural enemies (predators and parasitoids) of the insects attacking them. Silicon may thus be considered an environmentally friendly option within the concept of sustainable agriculture.
Conclusion
Intensive food production must be protected against pests and diseases, which is nowadays impossible with single, traditional techniques. However, the wide use of pesticides is still necessary, which may result in residues on food and food products higher than the allowed maximum residue level (MRL). The use of eco-friendly biopesticides based on essential oils (EOs), plant extracts (PE) and inert dusts appears to be a complementary or alternative method to chemically synthesized insecticides. The use of biopesticides may reduce the adverse effects of chemical pesticides on human health and the environment. Biopesticides can exhibit toxic, repellent and antifeedant effects on different insect species. Investigations aimed at developing new bio-insecticides tackle the problem of food safety and residues in fresh food. An innovation within this approach is the combination of several types of active ingredients with complementary effects. Essential oils are well known for their insecticidal or repellent activity, but so far their use in practice has been limited due to their high volatility and short period of action. This problem could be solved by their encapsulation with natural coating materials. With such formulations, their release would be prolonged, and EOs would have a chance to provide satisfactory efficacy against pests. New approaches, tools and products for ecologically improved pest management may substantially decrease pesticide use, especially in the fruit and vegetable sector. A win-win strategy is to find an appropriate nature-based compound with a wide spectrum of impacts on pest populations: toxic or repellent activity could be used to control their presence in field conditions, combined with the use of attractant compounds for pest mass trapping, followed by pesticide use when unavoidable.
© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Demystifying What Code Summarization Models Learned
Studying the patterns that models have learned has long been a focus of pattern recognition research. Explaining what patterns are discovered from training data, and how those patterns generalize to unseen data, is instrumental to understanding and advancing pattern recognition methods. Unfortunately, the vast majority of application domains deal with continuous data (i.e. statistical in nature) out of which extracted patterns cannot be formally defined. For example, in image classification, there does not exist a principled definition for a label of cat or dog. Even in natural language, the meaning of a word can vary with the context surrounding it. Unlike the aforementioned data formats, programs are a unique data structure with well-defined syntax and semantics, which creates a golden opportunity to formalize what models have learned from source code. This paper presents the first formal definition of patterns discovered by code summarization models (i.e. models that predict the name of a method given its body) and gives a sound algorithm to infer a context-free grammar (CFG) that formally describes the learned patterns. We realize our approach in PATIC, which produces CFGs summarizing the patterns discovered by code summarization models. In particular, we pick two prominent instances, code2vec and code2seq, to evaluate PATIC. PATIC shows that the patterns extracted by each model are heavily restricted to local, syntactic code structures with little to no semantic implication. Based on these findings, we present two example uses of the formal definition of patterns: a new method for evaluating the robustness and a new technique for improving the accuracy of code summarization models. Our work opens up this exciting, new direction of studying what models have learned from source code.
INTRODUCTION
Riding on the major breakthroughs in deep learning together with ever-increasing public datasets and computation power, machine learning models have enabled state-of-the-art solutions to a wide range of problems including image classification [Krizhevsky et al. 2012; Touvron et al. 2019], machine translation [Conneau and Lample 2019; Devlin et al. 2019], and game playing [Silver et al. 2016, 2017; Vinyals et al. 2019].
The success of learning-based approaches can be largely attributed to their capability of discovering patterns exhibited in a large amount of data. An influential subfield within machine learning has been dedicated to explaining and visualizing the patterns that models have learned from data. Moreover, the field has been receiving growing attention for its leading role in tackling some of the most imminent challenges in Artificial Intelligence (AI). For example, explainability is likely to be a central goal of the next-generation AI technology, and revealing what models have learned is a crucial first step to designing such explainable AI systems. In addition, from a scientific standpoint, dissecting the internal operation and behavior of complex models is necessary, because without a clear understanding of how and why machine learning models work, the development of better models is reduced to trial and error.
Among a few notable efforts, Chen et al. [2006] present a context-sensitive grammar to model the wide variations in object configurations via composite graphical templates. A strength of their approach is the ability to explain what patterns are recognized from test data at inference time; in the case of the cloth modeling discussed in [Chen et al. 2006], predictions are made using templates representing shoes, hands, faces, etc. Zeiler and Fergus [2014] introduce a visualization technique that gives insights into the function of individual feature layers and the end-to-end operation of a convolutional network, a class of Deep Neural Network (DNN) commonly applied in image classification. As a diagnostic tool, the visualization technique allows them to find model architectures that outperform AlexNet [Krizhevsky et al. 2012], the then state-of-the-art model on ImageNet [Deng et al. 2009]. We defer a detailed survey of related work to Section 6.
Despite the significant stride, formalizing the patterns that models have learned remains an exceedingly challenging task. This is in large part due to the nature of the problem domains to which learning-based approaches are applied. Models almost exclusively deal with continuous data, out of which learning formal patterns is difficult if not impossible. For example, no machine learning models known to this day set out to learn principled definitions for labels in ImageNet (e.g. panda, ostrich, goldfish, etc.). Even in the area of natural language processing, learning formal patterns is a tricky task since the meaning of words can be ambiguous.
Now that programming seems to have become yet another popular domain for machine learning models (exemplified by DNNs), it is vitally important to recognize that programs are a fundamentally different data structure. Specifically, they are discrete in nature with well-defined syntax and semantics. Syntactically, programs are written in a way that satisfies the recursive production rules defined in a context-free grammar. Semantically, the behavior of a program satisfies the inference rules defined in the small-step semantics [Plotkin and Kahn 1987]. All of the above leads to the insight of our work: patterns learned by models from source code can be formalized. However, we face an important challenge: how to efficiently navigate through an enormous search space of diverse program properties ranging from syntax to semantics?
Our solution is based on a key and rather unexpected observation we made about the behavior of many prominent models of code. Building on the work of Wang and Christodorescu [2019], which finds that syntactically trivial and semantically preserving code edits frequently cause models to alter their predictions, we observe an even more surprising phenomenon. That is, when models are given a program to predict, the program can almost always be reduced to very few statements (i.e. ≤ 2) for which models make the same prediction as they do for the original program. This is a significant finding in two ways. First, (1) it indicates that a small, local window of code sufficiently covers the patterns that models look for to predict the properties of the entire program. Therefore, the space for searching the pattern definitions is orders of magnitude smaller than one would have anticipated. Second, (2) for such simple patterns, which are often semantically meaningless, predicates of semantic properties can be safely ignored, which further restricts the search space to predicates of syntactic properties. Based on (1) and (2), a natural idea for defining a pattern emerges: synthesizing rules based on syntactic properties of the key statements in the original program. However, there is a caveat: can the key statements alone always preserve the label models predicted for the original program regardless of the surrounding context? To address this issue, we find the set of valid programs in which the key statements do preserve the original predicted label, from which we define the pattern that models learned.
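The statement-reduction step described above can be sketched as a greedy, delta-debugging-style loop. In the sketch below, `predict` is a hypothetical stand-in for a trained code summarization model (the real code2vec/code2seq APIs differ); it keys on a single API token purely for illustration:

```python
# A minimal sketch of reducing a method to its "seed" statements: greedily
# remove statements while the model keeps its original prediction.

def predict(stmts):
    # Toy model: predicts "save" whenever a `compress` call is present.
    return "save" if any("compress" in s for s in stmts) else "other"

def reduce_to_seed(stmts):
    """Return a minimal subset of `stmts` preserving the model's prediction."""
    label = predict(stmts)
    seed = list(stmts)
    changed = True
    while changed:
        changed = False
        for i in range(len(seed)):
            candidate = seed[:i] + seed[i + 1:]
            # Keep the removal only if the prediction survives.
            if candidate and predict(candidate) == label:
                seed = candidate
                changed = True
                break
    return seed

method = [
    "File f = new File(dir, name);",
    "OutputStream out = new FileOutputStream(f);",
    "bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);",
    "out.close();",
]
print(reduce_to_seed(method))  # only the `compress` statement survives
```

In practice the loop would query the actual model on re-parsed method bodies; the greedy strategy is quadratic in the number of statements, which is cheap for typical method lengths.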
At the technical level, we propose "Abstract, Mutate, Concretize, and Summarize", a novel method for pattern formalization. First, given a set of programs P that a model predicts with the same label, we abstract away the statements in each program in P that do not cause the model to alter its prediction. We call each remaining code snippet a seed, which captures the essence of the prediction the model made for the label. Second, since machine learning models have been known for their generalization capability, we conjecture that programs resembling a seed are likely to be predicted with the same label. Therefore, we mutate each seed to obtain additional code snippets, which we call mutants. Third, we synthesize full-fledged programs by inserting statements into each seed and mutant. In particular, we enumerate a diverse set of statements and expressions using the grammar of the language the programs in P are written in. We then pass each synthesized program to the model to get a label. Finally, we infer a context-free grammar that describes all synthesized programs for which the model predicts the label, as the principled definition of a pattern learned by the model w.r.t. that label. We develop a tool, PATIC, as an implementation of the method "Abstract, Mutate, Concretize, and Summarize", which automatically formalizes the patterns learned by two code summarization models: code2vec [Alon et al. 2019b] and code2seq [Alon et al. 2019a]. Code summarization refers to a task in which models aim to infer the name of a method given its body. Figure 1 shows an example; the correct prediction for this method is reverseArray. We target code summarization models due to the tremendous impact they have made on the programming language community. Since the publication of code2vec in Principles of Programming Languages (POPL) two years ago, it has not only gathered many citations (i.e. 130+) but also led to several interesting follow-up works (e.g. code2seq, sequence GNN [Fernandes et al.
2019], and LiGER [Wang and Su 2020]).

Figure 1. A method from [Alon et al. 2019b] whose name is stripped out for models to predict. The top-4 prediction made by code2vec is shown on the right.
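Under the same toy assumptions as before (methods as lists of statement strings, a hypothetical single-token `predict` standing in for a real model), the mutate and concretize stages can be sketched as follows; the filler statements play the role of the grammar-enumerated constructs inserted around each seed:

```python
# A toy walk-through of the Mutate and Concretize stages.
import itertools

def predict(stmts):
    # Hypothetical model keyed on one API token, for illustration only.
    return "save" if any("compress" in s for s in stmts) else "other"

SEED = ["bitmap.compress(fmt, 100, out);"]

# Mutate: produce variants of the seed (here, simple identifier renamings).
mutants = [[s.replace("bitmap", n)] for s in SEED for n in ("bmp", "image")]

# Concretize: wrap each seed/mutant with filler statements enumerated from
# a tiny statement grammar, yielding full-fledged candidate programs.
fillers = ["int x = 0;", "log.info(msg);", "return;"]
programs = []
for core in [SEED] + mutants:
    for pre, post in itertools.product(fillers, repeat=2):
        programs.append([pre] + core + [post])

# Keep only the programs the model still labels "save"; these form the
# language that the inferred context-free grammar must describe.
kept = [p for p in programs if predict(p) == "save"]
print(len(programs), len(kept))  # every filler context preserves the label
```

The final Summarize stage, omitted here, generalizes the kept programs into production rules of a CFG; with this toy model the grammar would simply require the `compress` seed statement somewhere in the body.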
Through the context-free grammars that PATIC inferred, we find that the patterns learned by both evaluated models are simple w.r.t. all labels included in Java-small, Java-med, and Java-large [Alon et al. 2019a], three public, large-scale, cross-project datasets used in many code summarization works. The average number of tokens present in a seed is computed to be 15.59. For almost all methods, the vast majority of the statements in their body can be removed, and the resulting seeds consist of fewer than two statements. In addition, there is little constraint on the synthesis of full-fledged programs given the presence of seed statements; many constructs we insert into seeds do not even remotely resemble the semantics of the non-seed statements. Nevertheless, models display a strong tendency to keep the predictions they made for the original programs. Our finding indicates that neither code2vec nor code2seq tries to learn a global, semantic representation of a given method; instead, they use local, syntactic program features as a proxy to simplify their memorization of the method.
Based on our findings on what code summarization models have learned, we present two example uses of our context-free grammar-based pattern formalization. First, we propose a new method to evaluate the robustness of code summarization models. In particular, we construct attacks to expose their vulnerabilities to small perturbations of the input programs. Our intuition is to concentrate changes on the seed statements, the part of a method on which models predominantly base their predictions, to sway the predictions models made for the original programs. In addition, we introduce four semantically-preserving program transformations which enable us to find very small perturbations (i.e. ≤ 2 tree-edit distance between the ASTs of perturbed and unperturbed programs) for every correctly-predicted test method in Java-small, Java-med and Java-large. Second, we propose a new technique to improve the generalizability of code summarization models. In the spirit of adversarial training [Goodfellow et al. 2015], we opt to include programs synthesized to address a particular weakness of code summarization models to re-train the models. Technically, after collecting the training programs for which models made incorrect predictions, we inject their seeds into other programs to synthesize additional training data. Specifically, by assigning the label of the hosting program to the synthesized program, we guide models to shift their attention to different syntactic structures than they previously attended to, which paves the way for them to connect to the ground truth. After undergoing such a re-training process, both code2vec and code2seq have become more accurate, especially code2seq, which achieves state-of-the-art results on Java-med and Java-large.
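One concrete way to obtain such small, semantics-preserving perturbations is local variable renaming, which changes only identifier leaves of the AST. The sketch below is our own illustration using Python's stdlib `ast` module (Python 3.9+ for `ast.unparse`), not one of the four transformations PATIC actually applies to Java:

```python
# A minimal semantics-preserving perturbation: rename one local variable.
import ast

def rename_variable(src, old, new):
    """Rename every occurrence of local variable `old` to `new`."""
    class Renamer(ast.NodeTransformer):
        def visit_Name(self, node):
            if node.id == old:
                node.id = new
            return node
    tree = Renamer().visit(ast.parse(src))
    return ast.unparse(tree)

src = "def f(xs):\n    n = len(xs)\n    return n"
print(rename_variable(src, "n", "count"))
```

Because the AST shape is untouched and only leaf labels change, the tree-edit distance of such a perturbation stays tiny, matching the budget mentioned above.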
This paper makes the following contributions: • A formal definition of patterns learned by code summarization models.
• A sound algorithm for formalizing patterns that code summarization models learned. Assuming the monotonicity property (cf. Definition 3.3), the algorithm is also complete. • An implementation, PATIC, which automatically generates definitions of patterns discovered by code summarization models in the form of context-free grammars. • An empirical evaluation of PATIC on two prominent code summarization models: code2vec and code2seq. Through the context-free grammars that PATIC inferred, we find that the patterns learned by neither model precisely capture the properties of programs bearing the predicted label. • A new method for evaluating the robustness of a model, which finds adversarial examples with smaller perturbations and within far fewer attempts than prior approaches. • A new technique for improving the accuracy of code summarization models, which enables code2seq to achieve state-of-the-art results on Java-med and Java-large.
OVERVIEW
In this section, we present an overview of our approach to formalizing patterns learned by code summarization models.
An Illustrative Example
First, we introduce a code summarization model and two input methods as our running example for illustrating the key idea and high-level steps of our approach. Model. We use code2seq, the state-of-the-art code summarization model on Java-med and Java-large. Like many other DNN-based models, code2seq strives to learn precise vectorial representations of source code. Such vectors, commonly known as program embeddings, capture the semantics of a program through their numerical components such that programs denoting similar semantics are located in close proximity to one another in the vector space. At a high level, code2seq adopts a generative approach for method name prediction. It employs a standard encoder-decoder architecture [Cho et al. 2014; Devlin et al. 2014] in which the encoder first embeds the ASTs of input methods into vectors, then the decoder uses the vectors to generate method names as sequences of words (e.g. reverse and array as a prediction in Figure 1). Input Methods. Figure 2a and 2b depict two Java methods with the name saveBitmapToFile, which are extracted from the training set of Java-large. The distinguishing feature of this label is the compress API under the Bitmap class, which is highlighted in the shadow box in both methods. It is worth noting that neither code2vec nor code2seq requires input programs to compile so long as they satisfy the syntactic grammar of the language they are written in. Also, both code2vec and code2seq only take individual methods as input. When another method is invoked in the body of the input method, no inter-procedural analysis is performed, nor is the callee inlined. Given the methods in Figure 2a and 2b, code2seq gives the correct prediction for both of them. Hereinafter, when referring to the inputs of code2vec or code2seq, we use programs and methods interchangeably.
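Concretely, treating a method name as a sequence of words amounts to splitting the camelCase label into subtokens. The sketch below is our own illustration of that decomposition (not code from code2seq itself):

```python
import re

def split_subtokens(name: str) -> list[str]:
    """Split a camelCase/snake_case method name into lower-cased subtokens,
    mirroring how code2seq-style models treat a label as a word sequence."""
    parts = re.split(r"_|(?<=[a-z0-9])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])", name)
    return [p.lower() for p in parts if p]

print(split_subtokens("saveBitmapToFile"))  # → ['save', 'bitmap', 'to', 'file']
print(split_subtokens("reverseArray"))      # → ['reverse', 'array']
```

The decoder then emits these subtokens one at a time, which is why partial matches (e.g. predicting only "save" and "file") are possible.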
Overview of "Abstract, Mutate, Concretize, and Summarize"
We now give an overview of our technique which consists of four major steps: abstract, mutate, concretize, and summarize.
2.2.1 Abstract. While DNN-based models have been gaining increasing popularity in the programming domain, Wang and Christodorescu [2019] cautioned they are notably unstable with their predictions. Simple, natural, semantically-preserving transformations frequently cause models to change their predictions. Figure 3 depicts an example in which the original method (3a) is correctly predicted to be factorial by code2vec, and the transformed method (3b), albeit semantically equivalent, is totally mishandled. None of the top-5 predictions even remotely resembles the ground truth considering that we only swapped the operands of the multiplication. Note that the probability of the top-1 prediction is even higher on the transformed method.
Their finding suggests that models don't evenly distribute their attention across the entire structure of a method; instead, they focus on a small, local window of code for making predictions. To validate this hypothesis, we aim to find the window of code on which models predominantly base their predictions. A simple idea is to exhaust all subsets of the statements in a given method to find the minimal subset for which models make the same prediction as they do for the original program. However, a challenge arises: since the number of subsets to be traversed grows exponentially with the size of the method, how can the search scale to methods consisting of a large number of statements? We defer a detailed discussion of how to overcome this challenge to Section 3.2.1. Figure 4 depicts the minimal programs we discovered for the methods in Figure 2a and 2b. We intend to emphasize three points. First, (1) no statement in either seed reflects the name of the methods: bos.close(), albeit indicating a high probability of file operations, does not represent "save to file", nor does it connect to "bitmap". The rest are log/display APIs which are completely irrelevant. The reason code2seq still predicts the label saveBitmapToFile for both methods is the parameters provided in the method headers, since changing the class name Bitmap to Image leads code2seq to predict a different label for both methods. Second, (2) the distinguishing features (i.e. the compress API under the Bitmap class) are absent from both seeds, casting serious doubt on what code2seq has learned. Finally, (3) despite their irrelevance, all statements in both seeds are necessary for keeping code2seq's prediction. This suggests that code2seq takes into account the syntactic structure of methods (two consecutive method invocations in the case of Figure 4a) to make predictions.
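The exhaustive-subset idea above can be sketched as follows. The predict function and the statements are hypothetical stand-ins for a trained model and a real method body; enumerating smallest subsets first makes every returned subset minimal:

```python
from itertools import combinations

def brute_force_seeds(statements, predict, target_label):
    """Exhaustively search subsets of `statements`, smallest first, for the
    minimal subsets that alone keep the model's prediction.  Runs in
    exponential time; `predict` stands in for a trained model."""
    seeds = []
    for k in range(1, len(statements)):          # skip the empty set and the full body
        for subset in combinations(statements, k):
            if any(set(s) <= set(subset) for s in seeds):
                continue                         # a smaller seed already lies inside
            if predict(list(subset)) == target_label:
                seeds.append(list(subset))       # minimal, by smallest-first order
    return seeds

# Toy model: predicts the label whenever the log statement is present.
toy_predict = lambda body: ("saveBitmapToFile"
                            if 'Log.e(TAG, "failed to save frame", e);' in body
                            else "unknown")
body = ['bos.close();', 'Log.e(TAG, "failed to save frame", e);', 'int x = 0;']
print(brute_force_seeds(body, toy_predict, "saveBitmapToFile"))
```

The exponential cost of this enumeration is exactly what the optimization in Section 3.2.1 is designed to avoid.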
2.2.2 Mutate.
It is well-known that machine learning models don't just create a mapping from input data to predicted labels through rote memorization; rather, they discover patterns in the data which generalize to inputs with similar characteristics. Based on this knowledge, we intend to find additional programs similar to the seeds which otherwise can't be obtained through a pure program abstraction approach. When mutating a seed, we modify its AST with the standard tree-edit operations (i.e. node insertion, deletion and renaming), and ensure the resulting mutants also abide by the syntactic grammar of the language. Figure 5 shows two mutants, among many we have discovered, which also preserve the predictions code2seq made for the original programs. We convey two takeaways. First, the value of individual tokens has a heavy influence on code2seq: in both mutants, the two string constants have to be present for code2seq to predict saveBitmapToFile. Second, as explained above, code2seq cares about the syntactic structure that methods exhibit. For example, in Figure 5b, even with the presence of "save to file succeeded", the major part of the pattern code2seq looks for, the string has to be passed as a parameter into a method call. Any other operation on the string (e.g. String _var_ = "save to file succeeded") will lead code2seq to predict a different label.
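As a minimal illustration of the renaming edit (our own sketch, not Algorithm 2 itself), the following enumerates single-identifier renamings of a tokenized seed; the token lists and vocabulary are hypothetical:

```python
import itertools

def rename_mutants(seed_tokens, identifiers, vocabulary):
    """Enumerate single-identifier renamings of a tokenized seed: every
    occurrence of one identifier is swapped for a fresh name.  A token-level
    stand-in for the paper's AST node-renaming edit."""
    for ident, fresh in itertools.product(identifiers, vocabulary):
        if fresh == ident or fresh in seed_tokens:
            continue                      # renaming must introduce a new name
        yield [fresh if t == ident else t for t in seed_tokens]

seed = ["bos", ".", "close", "(", ")", ";"]
for mutant in rename_mutants(seed, identifiers=["bos"], vocabulary=["out", "stream"]):
    print(" ".join(mutant))
```

Each candidate would then be passed to the model to check whether the prediction survives; only surviving variants count as mutants.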
2.2.3 Concretize.
Directly using seeds and mutants as the definition of the pattern that code2seq learned for the label saveBitmapToFile is a faulty approach despite their dominance in the predictions that code2seq makes, because no formal guarantee can be given that warrants the predicted label saveBitmapToFile when the seed or mutant statements are surrounded by an arbitrary context of code. Therefore, we first explore the space of all full-fledged programs that are valid concretizations based on seeds and mutants, then use the valid concretizations to define the pattern that code2seq learned for the label saveBitmapToFile. We call a concretized program valid when it preserves the prediction models made for the original program.
To synthesize a full-fledged program, one can simply enumerate all possible statements to insert into a seed or a mutant. However, such an approach is infeasible, because the space of programs that can be enumerated by the grammar of any programming language is infinite. We show how to overcome this feasibility challenge while looking for valid concretizations in Section 3.2.3. Figure 6 depicts two valid concretizations based on the mutants (Figure 5a and 5b) whose statements are highlighted in the shadow box. Apparently, both programs are written in drastically different syntax and even control flow constructs, and they do not denote the semantics of the original methods in any way, shape or form. code2seq keeps its original predictions purely because of the presence of the mutant statements. These examples are convincing evidence that code2seq does not learn to represent the semantics of a method; instead, it attends to local, small syntactic features to memorize a method name. 2.2.4 Summarize. Finally, to define the pattern code2seq learned for the label saveBitmapToFile, we infer a context-free grammar to describe all valid concretizations produced in the previous step. This is in fact a grammar inference problem [Biermann and Feldman 1972; Stevenson and Cordy 2014] where much of the success is still limited to inferring regular grammars [Oncina and Garcia 1992]. Fortunately, our problem setting is considerably simpler: the programs to be dealt with already abide by the context-free grammar of the language they are written in. In other words, we don't need to generate production rules from scratch but can recycle those a compiler would have used for parsing every concretized method. Specifically, we take the union of the grammars that describe each concretized method as the definition of the pattern code2seq learned for the label saveBitmapToFile.
For the sake of clarity and simplicity, Figure 7 depicts the context-free grammar that is inferred from only the concretizations of the two mutants in Figure 5. We use Backus-Naur-form style notation and add an extra quantifier, "?", which denotes zero or one occurrence of the quantified terminal or non-terminal. The key of the grammar is the production rule that describes how the non-terminal <seed block statements> can be replaced. Below it, the replacements of the non-terminals <seed1 core> and <seed2 core> point to the specific concretizations w.r.t. the two seeds. In particular, each production rule describes how the block statements, defined in the later rules, can be inserted into each seed without altering code2seq's prediction. For the non-terminals whose replacement is not defined in Figure 7, we reuse the production rules defined in the syntactic grammar of Java, the default input programming language of code2seq.
METHODOLOGY
In this section, we give a detailed presentation of our approach to formalizing patterns learned by code summarization models. In particular, we describe our method "Abstract, Mutate, Concretize, and Summarize".
Problem Definition
Given a model M and a set of programs P (from the training set of M) for which M predicts the label l, we aim in this work to formalize the pattern M learned from P. In other words, our formalization should define the common properties of P which M regards as a "trademark" for any program for which it predicts the label l. We emphasize two points: (1) P is extracted from M's training set, the only part of a dataset from which a model learns. During inference, models only attempt to match the learned patterns in test data; therefore we don't consider the non-training programs for studying what models have learned.
(2) the label l that M predicts can be incorrect for some (or even all) programs in P; regardless, M has learned a pattern that can be formalized. In fact, we propose an example use of the pattern definitions based on the incorrect predictions M made to improve its accuracy.
At a high level, the way we define a pattern that code summarization models learned is to generalize over all programs that exhibit the pattern using a context-free grammar (Definition 3.1). For the remainder of this section, we illustrate how to infer such a context-free grammar given a model and its training set. To assist our exposition, we use the notations introduced above throughout this section.
Definition 3.1. (Patterns) Let M be a model trained on a dataset D. Let P (s.t. P ⊊ D) be a set of programs for which M predicts the label l. The pattern learned from P for predicting the label l is a context-free grammar G = <N, Σ, S, R> with non-terminals N, terminals Σ, a start symbol S, and production rules R, which specifies the common properties of the programs for which M predicts the label l.
The Abstract, Mutate, Concretize, and Summarize Algorithm
This section presents our pattern formalization algorithm. In particular, it describes the four key functional components: abstract, mutate, concretize and summarize.
3.2.1 Abstract. The goal of this step is to identify a fragment of each method m in P that captures the essence of the prediction the model made for m. We name such fragments seeds (Definition 3.2); they satisfy the sufficient and necessary properties.
Definition 3.2. (Seed) Given a training method m whose body consists of a set of statements s, and a model M which predicts the label l for m, another method m̂ with a body of statements ŝ s.t. ŝ ⊆ s is said to be a seed of m iff it is sufficient, meaning M also predicts the label l for m̂, and necessary, meaning there does not exist a method m′ with a body s′ s.t. s′ ⊊ ŝ and M also predicts the label l for m′.
The intuition behind the sufficient property is to ensure that the statements in a seed alone lead models to the same prediction they made for the original method. As for the necessary property, our definition implies that no proper subset of the seed statements possesses the same capability; in other words, removing any statement from a seed will cause models to alter their predictions.
Definition 3.2 does not guarantee the uniqueness of seeds within a method. When there happen to be multiple sets of statements in the body that satisfy the sufficient and necessary properties, a method will have multiple seeds; in fact, the method in Figure 2a has more than one (Figure 4a and 8a). Note that the multiplicity of seeds does not necessarily violate our assertion that models base their predictions on a small, local window of code. A common behavior models display is that they feed off the most recognizable statement while receiving enough signal from the remaining statements to arrive at the original prediction. The seeds in Figure 4a and 8a are good examples of this behavior: code2seq treats Log.e(TAG, "failed to save frame", e) as the dominant statement for feature representation, but it still needs help from other statements (e.g. bos.close() in Figure 4a and the instantiation of a stream class within a try-catch clause in Figure 8a) to predict the label saveBitmapToFile. In other words, we consider the window models look into as centered around the dominant statement and extending across a variety of supporting statements. Another case we have found where models look into separate places in a method is when multiple statements render similar program features, such as the two seeds in Figure 4b and 8b, which differ only by a word in a constant string. From the perspective of feature representation, such models can be deemed to attend to the same window of code within the program.
To identify seeds in a given method, we can adopt a brute-force approach that exhausts all subsets of the statements in the body. The algorithm runs in exponential time, and will incur 2^n − 2 predictions (where n is the number of statements in the method; we don't need to consider the empty seed or the method itself, hence the "−2"). Although the approach can cope with methods of smaller size, it is hard to scale as the number of statements in the method increases. To address the potential scalability concern, we present an optimization of the brute-force approach which runs in quadratic time in the average case. The optimization is designed around the monotonicity property (Definition 3.3) of models. Before we give the formal definition of monotonicity, we explain our intuition at a high level. Since the weights of models are optimized to fit the training data, they behave differently on unseen data, the degree to which depends on the distributions from which the two datasets are drawn and on the capacity of the models themselves. As a concrete piece of evidence, models are always shown to be less accurate on the test set than on the training set of any widely acknowledged, well-established benchmark, no matter how powerful the models are or how similar the training data is to the test data. Since proving the property is out of the scope of this paper, we assume the monotonicity of models, to which we have not found a violation through our large-scale experimentation.
Definition 3.3. (Monotonicity) Given a model M, a method m₀ from the training set of M, a seed of m₀, a method mₙ outside of the training set of M, and the shortest sequence of tree-edit operations e₁, e₂, . . . , eₙ that transforms the AST of m₀ into that of mₙ, M is said to be monotonic iff it makes the same prediction for mᵢ (the result of applying eᵢ to mᵢ₋₁, where n > i ≥ 1) as it makes for m₀ whenever it makes the same prediction for mⱼ (where n ≥ j > i) as it makes for m₀. Similarly, M makes a different prediction for mⱼ (where n ≥ j > i) than it makes for m₀ whenever it makes a different prediction for mᵢ (where n > i ≥ 1) than it makes for m₀.
Algorithm 1 shows how to find seeds in a given method. The key is to avoid searching in a set of statements where no seed exists. Whenever selected_statements is found to satisfy the sufficient property, we save it as a seed (Line 8) and validate the absence of seeds in the remaining statements of the method (Line 10). Thanks to the monotonicity property, the absence of seeds can be validated in constant time: if the entire set of the remaining statements is not capable of leading models to the prediction they make for the original method, none of its subsets will be, hence they need not be checked; in that case, the algorithm returns and its output can be accessed through seeds. However, if the remaining statements present more seeds, we break out of the current loop (Line 13) and continue to search for additional seeds by recursively calling the procedure with a new method composed of the remaining statements (Lines 17-18). To avoid re-attempting the same statements, we save the previous attempts into traversed_statements (Line 15), which will be skipped in our quest for additional seeds in the future (Line 5). We do not explicitly check the necessary property because all discovered seeds are minimal by construction. In terms of running time, Algorithm 1 is guaranteed to find all seeds by traversing no more than ∑ᵢ₌₁ᵏ C(n, i) sets of statements, where n is the number of statements in a given method, k is the number of statements in the largest seed of the method, and C(n, i) denotes the number of combinations of i objects selected out of n. Since the vast majority of seeds consist of no more than two statements, Algorithm 1 runs in quadratic time in the average case.
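A compact sketch of Algorithm 1's pruning idea, under the monotonicity assumption: after each seed is found, the remaining statements are tested as a whole, and the search stops if they fail. The toy predict function below is a hypothetical stand-in for a trained model:

```python
from itertools import combinations

def find_seeds(statements, predict, label):
    """Seed search with the monotonicity shortcut: after saving a seed, the
    remaining statements are tested as a whole; if they fail, no subset of
    them can be a seed, so the search stops with a constant-time check
    instead of exponential enumeration."""
    seeds = []
    remaining = list(statements)
    while remaining:
        found = None
        for k in range(1, len(remaining) + 1):
            for subset in combinations(remaining, k):
                if predict(list(subset)) == label:
                    found = list(subset)        # minimal, by smallest-first order
                    break
            if found:
                break
        if not found:
            break
        seeds.append(found)
        remaining = [s for s in remaining if s not in found]
        if predict(remaining) != label:         # monotonicity: no seed hides here
            break
    return seeds

# Toy model: two statements each suffice on their own, yielding two seeds.
toy_predict = lambda body: "save" if "log(e);" in body or "flush();" in body else "?"
print(find_seeds(["close();", "log(e);", "flush();"], toy_predict, "save"))
```

When the remaining-statements check fails, an exponential number of subsets is skipped at the cost of a single model query, which is the source of the quadratic average-case bound.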
3.2.2 Mutate. In this step, we set out to find programs similar to the seeds which also exhibit the pattern that M looks for when predicting the label l. In particular, we mutate the AST of each seed to produce new programs through the standard tree-edit operations: adding nodes, removing nodes, and renaming nodes. Algorithm 2 gives the details of computing the set of all mutants according to Definition 3.4. We defer the discussion of the motivation behind Definition 3.4 to Section 3.2.3.
Definition 3.4. (Mutant) Given a training method m (with a body s), its seed m̂ (with a body ŝ), and a model M, a method m̄ whose body s̄ is a variant of ŝ (obtained by modifying the AST of one or more statements in ŝ with the tree-edit operations) is said to be a mutant of m iff M makes the same prediction for m̄ and for m′ such that m′'s body is s \ ŝ ∪ s̄. m̄ is said to be the weakest mutant iff any change that takes m̄ even further away from m̂ will lead M to predict a different label than it predicts for m. In contrast, m̄ is said to be the strongest mutant iff it alone keeps the prediction that the model made for m (exemplified by the programs in Figure 5).
Algorithm 2 adopts an iterative approach to find mutants from a set of seeds. In each iteration, we enumerate the minimal edits to transform a mutant discovered in the previous iteration (Line 5). By minimality, we mean that applying smaller edits to the corresponding mutant results in a syntactically invalid program (e.g. the minimal number of tree-edit operations needed to transform the AST of a+b into that of a[b] is two). For the very first iteration, we transform the seeds given as input. Note that we only consider modifications when enumerating the edits for each mutant: deletions are guaranteed to yield invalid mutants according to the definition of seed, and insertions are by nature not related to seed modification and will be handled in the next step. We then check the validity of each transformed program according to the definition of mutant (Lines 7-10); if the transformed program passes the validity check, we save it as a mutant and prepare it for further modification in later iterations (Lines 11-12). On the other hand, if the transformed program fails the check, it is permanently discarded, because by the monotonicity property any further modification of the program will not lead to valid mutants. The iteration stops when none of the transformed programs passes the validity check, at which point we have found all mutants. Changing Identifiers and Constants. Regarding the renaming operations for terminal nodes of identifiers (e.g. variable, type, or method names), they can be changed to any value that respects the lexical grammar (e.g. keywords reserved for the language cannot be used to name variables). To fully explore the search space, we consider both swapping identifiers within a given method and changing them to words that do not appear in the given method. As for constants, we apply the same trick without incurring type errors (e.g. int a = '3').
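The iterative search with permanent discarding can be sketched as follows; the token-level "edits" and validity check are hypothetical stand-ins for AST edits and a model query:

```python
def find_mutants(seed, neighbors, is_valid):
    """Iterative mutant search in the spirit of Algorithm 2: expand each
    surviving mutant by minimal edits (`neighbors`); variants failing the
    validity check are discarded permanently, since by monotonicity no
    further edit can revive them."""
    frontier, mutants, seen = [seed], [], {tuple(seed)}
    while frontier:
        next_frontier = []
        for m in frontier:
            for v in neighbors(m):
                if tuple(v) in seen:
                    continue
                seen.add(tuple(v))
                if is_valid(v):               # stand-in for a model query
                    mutants.append(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return mutants

# Toy setting: a minimal edit renames one token to its successor in a tiny
# vocabulary; a variant stays valid as long as the token "log" survives.
vocab = {"a": "b", "b": "c", "c": "d"}
neighbors = lambda m: [[vocab[t] if i == j and t in vocab else t
                        for j, t in enumerate(m)] for i in range(len(m))]
is_valid = lambda m: "log" in m
print(find_mutants(["log", "a"], neighbors, is_valid))
```

The search terminates because failed variants are never revisited and the `seen` set prevents cycles through the edit space.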
3.2.3 Concretize. Even though the seeds and mutants are mostly responsible for the predictions models make, it is premature to take their properties as the definition of a pattern that models have learned: statements in the seeds (or mutants) are not guaranteed to preserve the predictions models made for the original programs, considering that their surrounding context can be composed of arbitrary code. Therefore, the goal of this step is to find the space of programs, anchored by the seed (or mutant) statements, that keeps the predictions models made for the original program. Concretize with Seeds. As discussed in Section 3.2.1, due to the nature of the mechanism by which models are trained, a discrepancy in their performance on the training and test data is inevitable. The closer an unseen sample is to a training example, the higher the probability it will receive the same predicted label as the training example. Based on this property, we introduce our technique to concretize a seed below.
We simply restore the removed statements in the original method, and incrementally edit the restored statements (while leaving the seed statements intact) until the resulting method no longer keeps the prediction models made for the seed. Our intuition is to quantify the space of all possible concretizations using a set of closed intervals, each of which measures how far a concretized method is from the original method along a trajectory of edits. At the higher end of each interval is the closest concretized method (i.e. the original method itself), which yields the widest margin for potential modifications. At the lower end is the furthest concretized method (called the threshold method) which, albeit still underpinned by the seed statements, has already exhausted its budget with the accumulated edits to the non-seed statements. In other words, any change that would take it even further from the original method will cause models to alter the prediction they made for the seed. By monotonicity, any program that lies between the two ends of an interval is a valid concretization. Recall the context-free grammar inferred for the illustrative example in Section 2: we avoided constraining the recursion depth of the production rule for the clarity of our presentation. In reality, a depth of three gives a good approximation of the lower end of each interval regarding the concretization of the seeds and mutants in Figure 4 and 5. Finally, as a verification mechanism, we confirm the validity of each concretized method by passing it to the models for its predicted label. A concretized method will only be kept when it leads models to the prediction they made for the original program.
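The interval idea can be sketched as follows: walk a fixed trajectory of edits away from the original method and keep every prefix up to the threshold. The edit functions and toy predictor are hypothetical:

```python
def concretization_interval(original, seed, edit_trajectory, predict, label):
    """Walk a fixed trajectory of edits away from `original`, leaving the
    seed statements intact, and collect every prefix up to the threshold
    method, after which the model's prediction flips (monotonicity)."""
    valid = [original]
    current = original
    for edit in edit_trajectory:
        current = edit(current)
        assert set(seed) <= set(current), "edits must not touch the seed"
        if predict(current) != label:
            break                                # threshold method reached
        valid.append(current)
    return valid

seed = ['Log.e(TAG, "failed", e);']
original = ['bos.close();'] + seed
edits = [
    lambda m: ['out.close();'] + m[1:],          # rename the non-seed receiver
    lambda m: ['out.flush();'] + m[1:],          # substitute the non-seed API
]
toy_predict = lambda m: "save" if seed[0] in m and "close" in m[0] else "?"
print(concretization_interval(original, seed, edits, toy_predict, "save"))
```

Here the second edit flips the toy prediction, so the interval contains the original method and one concretization; everything past the threshold is discarded.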
Technically, when varying the non-seed statements, we first keep the control flow structure of the original method and only modify the non-control statements or expressions in the control constructs. Later, when switching to a different control flow structure c, we reset the higher end of each interval to be m_c, a program that is the closest to the original program m among all that employ the new control flow structure c. Formally, we define m_c in Equation 1:

m_c = argmin_w d(w, m) s.t. CF(w) = c and NCF(w) ⊆ NCF(m) ∪ {∅}    (1)

where m is the original program from which the seed is derived, d measures the distance between two programs, CF(w) returns the control flow structure of w, and NCF(w) returns the set of non-control statements of w. The reason we consider {∅}, a set containing an empty statement, is to allow control statements with empty bodies when the existing non-control statements run out (e.g. when the number of control statements is greater than that of non-control statements). Similarly, the margin m_c induces will be gradually consumed by the changes we apply to m_c. Like before, the enumeration terminates when m_c no longer preserves the prediction models made for the seed, and we deem the programs between the two ends of each interval to be concretizations of the seeds with different control flow structures. Concretize with Mutants. In the aforementioned approach, seed statements serve as the anchor in the method and are kept intact throughout the concretization process. Now we discuss how to handle the situation in which seed statements are modified. Once a seed is modified, its statements may lose the capability of keeping the original prediction models made, even with the facilitation of the non-seed statements in the original method. Figure 9 gives an example in which both programs altered code2seq's prediction due to the modification of the seed statements, despite the presence of all non-seed statements. Such modified seeds, which fail to preserve the original prediction even when the non-seed statements are present, are not considered for subsequent concretization.
The reason is that they do not exhibit any budget for the further changes required to concretize a modified seed, according to the monotonicity property. On the other hand, the modified seeds that remain in the mix are precisely the mutants (Definition 3.4), among which the one with the least budget for change is called the weakest mutant. To concretize each mutant into a full-blown method, we follow the same procedure by which we concretize a seed: we inject into the body of a mutant the non-seed statements of the original method. The resulting method, which has the largest room for potential changes, keeps being modified until models no longer preserve the prediction they made for the seeds. As in seed concretization, statements in a mutant are never changed, because the mutants themselves already cover all the valid changes. In addition, we employ the same verification mechanism to ensure every concretization based on the mutants is valid.
3.2.4 Summarize. Finally, we infer a context-free grammar that describes precisely all concretized methods produced in the previous step. We declare this grammar to be the definition of the pattern learned from P w.r.t. the label l. To solve this grammar-inference problem, we don't reinvent the wheel but reuse the syntactic grammar that input programs already employ: we extract the production rules a compiler would have used for parsing each concretized method and combine them into a unified grammar. Formally, we take the union of the terminals, non-terminals, and production rules extracted from each grammar. Regarding the definition of the pattern that code2seq learned for the label saveBitmapToFile, our inferred grammar describes 13 distinct control flow structures and can be instantiated into hundreds of programs with different syntactic structures. In other words, seeds and mutants are capable of preserving the predicted label saveBitmapToFile most of the time when the enumerated programs are similar in size to the original method. Through the grammar we inferred, we conclude that code2seq has not learned a semantic representation of methods named saveBitmapToFile; instead, it memorizes the methods through small, syntactic features.
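A minimal sketch of the union step, assuming each per-method grammar is represented as a mapping from non-terminals to sets of rule bodies (the rules themselves are hypothetical, not those of Figure 7):

```python
def union_grammars(grammars):
    """Union the production rules of per-method grammars into one CFG, as in
    the summarize step: each grammar maps a non-terminal to the set of rule
    bodies a compiler would have used for parsing that concretization."""
    merged = {}
    for g in grammars:
        for lhs, bodies in g.items():
            merged.setdefault(lhs, set()).update(bodies)
    return merged

# Hypothetical per-concretization rule sets (not the rules of Figure 7).
g1 = {"<stmt>": {("<call>", ";")}, "<call>": {("bos", ".", "close", "(", ")")}}
g2 = {"<stmt>": {("<if>",)}, "<if>": {("if", "(", "<expr>", ")", "<stmt>")}}
merged = union_grammars([g1, g2])
print(sorted(merged["<stmt>"]))  # both rule bodies survive under <stmt>
```

Because the union only ever adds alternatives, every concretized method remains derivable from the merged grammar.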
Correctness
We show that our "abstract, mutate, concretize, and summarize" algorithm is correct w.r.t. Definition 3.1 (Patterns). In particular, Theorems 3.1 and 3.2 give the soundness and completeness proofs for our algorithm.
Theorem 3.1 (Soundness). Given a model M trained on a dataset D, and a grammar G that the algorithm inferred as the definition of the pattern learned from D for predicting a label l, the algorithm is said to be sound iff G does not describe any program for which M does not predict the label l.
Proof. All concretizations of seeds or mutants are verified through M's predictions. The context-free grammar that the algorithm infers describes precisely all the valid concretizations. Therefore, by construction the theorem holds. □ Theorem 3.2 (Completeness). Given a model M trained on a dataset D, and a grammar G that the algorithm inferred as the definition of the pattern learned from D for predicting a label l, the algorithm is said to be complete iff G describes every program for which M predicts the label l.
Proof. Assume otherwise: there exists a program p such that M predicts the label l for p and G does not describe p. Regarding p's properties, one of the following conditions has to be met: (a) p is from the training set of M. Recall the concretize step of our algorithm, which uses every training program with the predicted label l to enumerate the valid concretizations. In other words, all such training programs are already included in the concretize step and will be described by G. This contradicts the assumption; thus, the condition is not met. (b) p is not from the training set of M.
(1) p contains the statements in a seed derived from a training program m. Per our assumption that G does not describe p, p is not a valid concretization of m, which means p does not lie on any trajectory of edits that changes m into a threshold method. By monotonicity, M does not predict the label l for p, hence contradicting the assumption. (2) p contains the statements in a mutant derived from a seed. With the same strategy as adopted in (1), this condition can also be refuted. (3) p contains neither the statements in a seed nor the statements in a mutant. As discussed in the concretize step, p will not lead M to predict the label l. The condition is also false.
Since none of the conditions above can be satisfied, it can be inferred that p does not exist. □
EXAMPLE USES OF THE PATTERN DEFINITION
In this section, we show the practical implications of our pattern definitions: a new method for evaluating the robustness, and a new technique to improve the accuracy of code summarization models.
Evaluating Robustness of Code Summarization Models
The robustness of a model refers to the reliability of the predictions it makes, especially on adversarial examples, a special type of data created by systematically perturbing the model inputs. Szegedy et al. [2013] were the first to discover the existence of adversarial examples, in the image classification domain: visually indistinguishable perturbations cause models to alter the predictions made for the original image. Existing approaches to evaluating the robustness of a model fall into two categories: formally verifying a lower bound [Gehr et al. 2018; Raghunathan et al. 2018; Wang et al. 2018] (i.e. predictions are guaranteed to hold under perturbations no greater than ε w.r.t. some distance metric such as L₀, L₁ or L∞), or constructing attacks to demonstrate an upper bound [Carlini and Wagner 2017; Goodfellow et al. 2015] (i.e. perturbations of size ε are sufficient to make models alter their predictions).
Wang and Christodorescu's method falls into the latter, in which they create adversarial examples by applying semantically-preserving transformations to the original programs. Interested readers are encouraged to consult the supplemental material for examples of their transformations. Despite the significant findings, their approach suffers from two issues. First, since there are many applicable transformations, and each transformation can be applied to multiple places in a given method, blindly attempting all the possibilities is quite an inefficient approach. Furthermore, their transformations often cause a fair amount of changes to the original method, thus tending to find loose bounds that do not accurately reflect the robustness of a model.
To address the weakness of their approach, we leverage the pattern definitions to pinpoint adversarial examples that demonstrate a far tighter bound than Wang and Christodorescu's method. Intuitively, since we aim to minimize the number of changes when perturbing the input methods, we only modify the seed statements, the part of the method models heavily attend to, to create adversarial examples. We introduce four semantically-preserving transformations -variable renaming, operands swapping, API substitution, and statements reordering -that only make minor edits to the seed of the input methods. Figure 10 depicts an example for each transformation except variable renaming. For each column, the block at the top holds the original method and that at the bottom holds the transformed method. In comparison with Wang and Christodorescu's method, our transformations make smaller edits to the original programs.
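As a rough illustration of two of these transformations, the token-level sketch below performs variable renaming and operand swapping on source strings; a faithful implementation would operate on the AST, and the helper names and the whole-word-substitution shortcut are our own assumptions.

```python
import re

def rename_variable(method_src: str, old_name: str, new_name: str) -> str:
    """Rename a variable via whole-word substitution. A token-level
    stand-in for an AST-based rename; it assumes `old_name` is not
    shadowed and does not occur inside string literals."""
    return re.sub(rf"\b{re.escape(old_name)}\b", new_name, method_src)

def swap_operands(expr: str) -> str:
    """Swap the operands of a single commutative binary expression,
    e.g. 'a + b' -> 'b + a'. Only handles the simple 'x OP y' shape;
    anything else is returned unchanged."""
    m = re.fullmatch(r"\s*(\S+)\s*([+*]|==|!=)\s*(\S+)\s*", expr)
    if m is None:
        return expr
    left, op, right = m.groups()
    return f"{right} {op} {left}"

seed = "mRunningAnimator = null;"
print(rename_variable(seed, "mRunningAnimator", "animator"))  # animator = null;
print(swap_operands("a + b"))                                  # b + a
```

Because both edits touch only the seed statements, the resulting programs stay close to the originals under tree-edit distance.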
Improving Accuracy of Code Summarization Models
We introduce a new technique based on the pattern definitions that improves the prediction accuracy of code summarization models. Our technique is built on an important observation we made about the incorrect predictions that models make. When models mis-predict a method that has a label l1 to have a label l2, more often than not, the seed of the mis-predicted method is similar to the seed of another method with the label l2, in which case models are incapable of making a clear distinction, resulting in the mis-predictions. Figure 11 gives an example in which the two methods have similar seeds but distinct names. code2seq mis-predicted the method in Figure 11a to be stopAnimation, the name of the method in Figure 11b.
Fig. 11. code2seq mis-predicted the name of (a) to be stopAnimation, the name of the method in (b). The reason is that the seed statement in (a) is similar to the seed statement in (b). We highlight the seed statements of both programs within the shadow box.
Inspired by this finding, we propose a new approach, along the lines of adversarial training [Goodfellow et al. 2015], which guides models to attend to a different part of an input method when predicting its label. The hope is that the new seed will not clash with any existing seed extracted from the training programs in the entire dataset. At the technical level, we create new training programs by injecting the seed of a mis-predicted method into a variety of methods (i.e. with different labels) that are correctly predicted. Each freshly-created program is assigned the label of the correctly predicted method that hosts the seed. Our intention is to neutralize the previous seed, which causes models to mis-predict, with a new seed, which hopefully would lead them to the ground truth. Figure 12 depicts an example, in which we drop the seed statements of the mis-predicted program (Figure 11a) into three methods named contains, count, and indexOf respectively. The resulting methods keep the names of the hosting methods as highlighted in the figure. In practice, many such instances will be added to the training set. As a result, models are forced to shift their attention on the mis-predicted program, because keeping mRunningAnimator = null as the seed would affect their accuracy on the additional training samples despite the higher weight of the hosting methods.
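The injection step described above can be sketched as follows; the list-of-statements representation and the placement of the seed inside the host body are simplifying assumptions of ours.

```python
def inject_seed(seed_stmts, host_body, host_label):
    """Create one augmented training example by inserting the seed
    statements of a mis-predicted method into a correctly-predicted
    host method. The new example keeps the HOST's label, so the model
    must learn that the injected seed no longer determines the name."""
    # Place the seed right before the host's final statement so the
    # host's own logic still dominates (a placement assumption).
    new_body = host_body[:-1] + list(seed_stmts) + host_body[-1:]
    return {"body": new_body, "label": host_label}

def augment(seed_stmts, hosts):
    """hosts: list of (body, label) pairs of correctly predicted methods."""
    return [inject_seed(seed_stmts, body, label) for body, label in hosts]

seed = ["mRunningAnimator = null;"]
hosts = [
    (["for (T x : xs) if (x.equals(y)) return true;", "return false;"], "contains"),
    (["int c = 0;", "return c;"], "count"),
]
examples = augment(seed, hosts)
print([e["label"] for e in examples])  # ['contains', 'count']
```

Feeding such examples back into training penalizes the model whenever it keeps basing its prediction on the injected seed alone.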
Seed statements, which always emit strong signals to influence models, will not be ignored; the only way out is to neutralize them, which is the goal of the re-training. To select the mis-predicted programs, we target labels where models display the highest error rates, which prevents us from overfitting models to a few outliers for an otherwise perfectly-predicted label: a poor prediction accuracy on a sizable number of programs with the same label indicates an underfitting issue, which our new technique addresses.
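Selecting which mis-predictions to fix can be sketched as picking the labels with the highest error rate, subject to a minimum number of supporting examples; the function name and thresholds here are illustrative, not the paper's exact criteria.

```python
from collections import defaultdict

def pick_labels_to_fix(predictions, k=2, min_support=2):
    """predictions: list of (true_label, predicted_label) pairs.
    Return the k labels with the highest error rate, skipping labels
    with fewer than `min_support` examples to avoid chasing outliers."""
    total = defaultdict(int)
    wrong = defaultdict(int)
    for truth, pred in predictions:
        total[truth] += 1
        if pred != truth:
            wrong[truth] += 1
    rates = {l: wrong[l] / total[l] for l in total if total[l] >= min_support}
    return sorted(rates, key=rates.get, reverse=True)[:k]

preds = [("stop", "stopAnimation"), ("stop", "stop"), ("stop", "halt"),
         ("size", "size"), ("size", "size"),
         ("rare", "other")]  # a single example: filtered out by min_support
print(pick_labels_to_fix(preds))  # ['stop', 'size']
```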
EVALUATION
We have realized our algorithm "abstract, mutate, concretize and summarize" in a tool, called PATIC, which formalizes the patterns code summarization models learned using context-free grammar. In the first part of the evaluation, we give the details of the pattern definitions that PATIC produces. For the example applications of the pattern definitions, we also evaluate the effectiveness of the method for finding adversarial examples, and the new technique for improving the prediction accuracy of code summarization models.
Evaluation Subjects
Models. code2vec, code2seq, sequence GNN, and LiGER are the most notable code summarization models in the literature. In the ideal case, all of them would be included in our experiments. However, we had significant difficulties in reproducing the results sequence GNN achieves [Fernandes et al. 2019]. Our reimplementation performed significantly worse (i.e. more than 10% in F1) than theirs on the same benchmark used in their experiments. We suspect the discrepancy is caused by inconsistent extractors, which convert a method into the graph representation amenable to the model, because the original extractor is the only part of their pipeline that is not open-sourced. Concerned about the validity of our results on an inferior reimplementation, we regrettably exclude sequence GNN from our experiment. We also exclude LiGER, a model that heavily depends on program executions, due to its rather limited applicability and generality. Compared to code2vec and code2seq, which do not even require programs to compile, LiGER requires programs to execute. For this reason, LiGER is evaluated on less than 10% of the methods in Java-med and Java-large, since the vast majority do not trigger interesting executions for LiGER to learn from.
Datasets. We use Java-small, Java-med, and Java-large, three public datasets that many code summarization models use for evaluation. They were proposed by Alon et al. [2019a] and are collections of Java methods extracted from a large number of projects on GitHub. We have retrained code2vec and code2seq using their implementations open-sourced on GitHub. Table 1 shows the re-trained models are comparable to the originals [Alon et al. 2019a].
What Have code2vec and code2seq Learned
Now, we give the details about the pattern definitions PATIC produced for code2vec and code2seq.
Finding Seeds. In general, we follow Algorithm 1 to identify the seed of a given program. Regarding control statements, we treat their bodies as independent of the control predicates. For example, when abstracting an if statement, we either delete the if condition and keep the statements in the body, or remove a non-control statement from its body. We apply this method recursively to deal with nested control constructs. Table 2 depicts the size of the seed, in terms of the number of tokens it is composed of, for code2vec and code2seq (e.g. mean and median). The number in the parenthesis denotes the percentage a seed's tokens make up of a whole method's. Similarly, Table 3 gives the statistics of the strongest mutants. Apparently, both models only learned local, syntactic program features, as the seeds do not capture the global, semantic properties of input methods.
Concretizing Seeds and Mutants. At the concretize step, we adopt the same approach to dealing with identifiers (e.g. variable, type, or method names) as we do at the mutate step. In general, the concretizations of a seed or a mutant cover a wide spectrum of program structures, many of which do not resemble the semantics of the statements outside of the seed. Figure 13 gives two more concretizations in addition to those in Figure 6. Table 4 reports the number of distinct control flow structures a concretization includes. Table 5 gives the same statistics w.r.t. the actual program instances in terms of syntactic variations. Evidently, both models pay little to no attention to the non-seed statements in input methods, as they are regularly substituted with other, drastically different statements without altering the predictions models made for the original programs. Both code2vec and code2seq predominately base their predictions on a small fraction of the input methods.
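In the spirit of Algorithm 1 (which we only summarize above), seed finding can be sketched as a greedy reduction that keeps deleting statements as long as the model keeps its prediction; the flat statement list and the stand-in predicate are simplifying assumptions, since the real algorithm recurses into control-statement bodies and queries an actual model.

```python
def find_seed(statements, predicts_label):
    """Greedily delete statements while the model's prediction is
    preserved (a 1-minimal reduction, similar in spirit to delta
    debugging). `predicts_label(body)` stands in for asking the model
    whether it still emits the original label for the reduced body."""
    seed = list(statements)
    changed = True
    while changed:
        changed = False
        for i in range(len(seed)):
            candidate = seed[:i] + seed[i + 1:]
            if predicts_label(candidate):  # prediction preserved -> keep deleting
                seed = candidate
                changed = True
                break
    return seed

# Stand-in model: predicts the original name as long as the assignment
# to mRunningAnimator survives (an illustrative assumption).
method = ['log.debug("x");', "mRunningAnimator = null;", "return;"]
pred = lambda body: "mRunningAnimator = null;" in body
print(find_seed(method, pred))  # ['mRunningAnimator = null;']
```

The surviving statements are exactly the part of the method the model attends to, i.e. the seed.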
Evaluating the Robustness of code2vec and code2seq
We construct attacks on code2vec and code2seq based on the pattern definitions PATIC produces. Given an input method, we identify its seed statements, on which we apply the aforementioned semantically-preserving transformations to look for potential adversarial examples. If a transformed program leads a model to a different prediction than it made for the original method, an adversarial example is found, and the robustness of the model can be calculated by averaging the distance between the closest adversarial examples and the original methods that are correctly predicted in a test set. We adopt the tree-edit distance (with a node swapping operation) as the metric to measure the distance between programs, because others like ℓ0, ℓ1 or ℓ∞, which are typically used in the setting of adversarial learning, are not suitable. Tables 6 and 7 depict the robustness score for code2vec and code2seq based on programs that compile, which can be deemed the first line of defense. Tables 8 and 9 show on average how many attempts the two methods take to find the closest adversarial examples. Given multiple applicable transformations, we rank them in ascending order of the distance between the resultant program after the transformation is applied and the original method. Transformations that result in programs of the same distance are picked randomly. The baseline refers to Wang and Christodorescu's method, which makes far more attempts than our method to find adversarial examples:

Methods    Java-small  Java-med  Java-large
Baseline   2.39        3.50      3.14
PATIC      1.12        1.68      3.02

We also dive deeper into the robustness scores we obtained, and find that both methods heavily rely on variable renaming to create adversarial examples. For a fair evaluation of the other program transformations, we exclude variable renaming and repeat the same experiment.
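The robustness score described above, the average distance to the closest adversarial example over the correctly-predicted test programs, can be sketched as follows; the attack and the tree-edit distance are abstracted behind a callback, and the function names are our own.

```python
def robustness_score(originals, attack):
    """Average distance to the closest adversarial example over the
    correctly-predicted test programs. `attack(p)` returns the distances
    (tree-edit distance in the paper; abstracted here) of all adversarial
    examples found for program p, or an empty list if none is found."""
    distances = []
    for p in originals:
        found = attack(p)
        if found:
            distances.append(min(found))
    return sum(distances) / len(distances) if distances else float("inf")

# Toy attack: each program maps to precomputed candidate distances.
candidates = {"m1": [3, 1], "m2": [2], "m3": []}
score = robustness_score(["m1", "m2", "m3"], lambda p: candidates[p])
print(score)  # (1 + 2) / 2 = 1.5
```

A lower score means the attack demonstrates a tighter upper bound on the model's robustness.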
Tables 10 and 11 report the percentage of programs for which adversarial examples cannot be created with the other transformations, for code2vec and code2seq respectively. Using the remaining programs on which adversarial examples can be created, we report the robustness scores of code2vec and code2seq in Tables 12 and 13. The results presented in Tables 10-13 suggest that, when the variable renaming transformation is not considered, our approach finds adversarial examples not only for far more programs on all datasets but also with significantly smaller edits to the original methods:

Methods    Java-small  Java-med  Java-large
Baseline   72%         68%       64%
PATIC      35%         27%       31%

Methods    Java-small  Java-med  Java-large
Baseline   68%         64%       67%
PATIC      28%         29%       33%

Both code2vec and code2seq are vulnerable to adversarial examples. Perturbing the seed statements in a given method easily causes code2vec and code2seq to alter their predictions.
Improving the Accuracy of code2vec and code2seq
We propose a new technique to improve the prediction accuracy of code2vec and code2seq by guiding them to correct their own mis-predictions. To prepare code2vec and code2seq for the re-training, Tables 14 and 15 give the number of labels under which we pick the mis-predicted programs to fix, the average error rate that the model displays on these labels, and the number of generated programs for re-training. Table 16 shows the results of code2vec and code2seq on all three datasets after the re-training. Clearly, our technique does not just overfit the models to their training set, as their accuracy on the test set has also been consistently improved, especially for code2seq, which now achieves the state-of-the-art results on Java-med and Java-large. On the other hand, we acknowledge the improvement is not substantial; nevertheless, we believe the technique is still a significant contribution for the following reasons.
• Reliability: All results presented in Table 16 are the average over ten separate training instances, except those based on code2seq with Java-large, which take more than a week to train and are therefore the average over five. Therefore, the higher accuracy that both models show is not random noise but a reliable improvement.
• Simplicity: As a major selling point, our technique is in nature a data augmentation approach, which does not require one to change the architecture of an existing model. By systematically augmenting the training set based on the previous mis-predictions, the improvement comes at a much lower cost than designing a new model.
• Effectiveness: The baseline in Table 16 augments the training set with the same amount of additional data randomly selected from GitHub. The added data have the same labels as the generated programs used for re-training. As shown in the table, randomly augmenting the training set does not always lead models to an improved accuracy, as both models display almost the same performance as before. In addition, we give concrete evidence of the effect the re-training has on code2seq. Given the mis-predicted program (Figure 11a), code2seq corrects its own mistake by expanding the seed statements (highlighted in Figure 14a), which enables it to differentiate the method in Figure 14a from that in Figure 14b.
The augmentation approach improves the accuracy of both code2vec and code2seq by guiding them to correct their own mis-predictions; in particular, code2seq achieves the state-of-the-art results on Java-med and Java-large on top of its existing architecture.
Discussion on Threats to Validity
A major threat to the validity of our approach is the assumption we made regarding the monotonicity property of code summarization models. Intuitively, given how machine learning models are trained to fit the training data, monotonicity is a reasonable assumption. In addition, this behavior is certainly backed up by our extensive experiments, as no violation has been discovered. Nevertheless, we have not given a principled proof of the property, which may not hold for the evaluated models. But even if the monotonicity property is proven to be false, we believe the validity of our approach still holds to a great extent.
First of all, the soundness of the approach is not affected, since the validity of seeds, mutants, and concretizations is verified through predictions made by the subject models. This means our primary findings, the patterns that code2vec and code2seq learned for predicting method names, remain valid. As a result, our secondary contributions regarding the applications of the pattern definitions (a new method for evaluating the robustness and a new technique for improving the accuracy of code2vec and code2seq) are also valid. What would be affected is the completeness of our approach: the inferred grammar that defines what models learned for a label could very well miss programs for which models also predict the label. However, given the maintenance of soundness and its implications, we conclude that the validity of our approach mostly holds without the monotonicity property.
RELATED WORK
In this section, we survey two strands of related work: studying what models have learned and predicting the names of methods given their bodies.
Patterns Models Have Learned
In computer vision, Han and Zhu [2008] propose an attribute graph grammar for parsing images with man-made objects, such as buildings, hallways, and kitchens. Their algorithm focuses exclusively on detecting rectangular shapes, and uses six production rules to specify the various spatial relationships among the detected rectangles as the basis for object detection. Later, a similar grammar was applied to the setting of cloth modeling, an important task in human recognition and tracking, to model the wide variations of cloth configurations [Chen et al. 2006]. A particular strength of their approach is the ability to provide insights into what patterns the algorithm recognized, based on the activation of the production rules during inference.
Zeiler and Fergus [2014] propose a visualization technique to demystify the function of intermediate feature layers and the operation of convolutional neural networks. Built upon a deconvolutional network [Zeiler et al. 2011], their technique reveals the input stimuli that excite individual feature maps at any layer in the model. It also allows one to observe the evolution of features during training and to diagnose potential problems with the model. As another contribution of the technique, they can reveal which parts of the input images are important for classification. Bau et al. [2019] present an analytic framework to visualize and understand generative adversarial networks. First, they identify the units in a layer whose feature maps correspond to the detection of a class of objects (e.g. trees). Second, they intervene within the network to switch off (or back on) the detection of the class of objects (e.g. forcing the activation of the identified units to be zero), and quantify the average causal effect of the ablation. Finally, they examine the contextual relationship between these causal object units and the background.
Code Summarization Models
code2vec is the first code summarization model in a cross-project setting. It works by (1) decomposing the Abstract Syntax Tree (AST) of an input method into a collection of AST paths, each of which is a path between nodes in the AST, starting from one terminal, ending in another terminal, and passing through the common ancestor of both terminals; (2) aggregating the embedding learned for each AST path; and (3) predicting a probability distribution over a set of given labels based on the aggregated embedding. code2vec also employs the attention mechanism [Bahdanau et al. 2015; Vaswani et al. 2017] to assign different weights to AST paths. In other words, the embedding of the method is a weighted sum of the embeddings of the paths in the AST.
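To illustrate step (1), the toy sketch below enumerates terminal-to-terminal paths through the lowest common ancestor in a tiny nested-tuple AST; real extractors work on full Java ASTs and also record edge directions and path contexts, which we omit here.

```python
def leaf_paths(tree):
    """Enumerate code2vec-style AST paths: (start terminal, labels on the
    path through the lowest common ancestor, end terminal). `tree` is a
    nested tuple (label, child, ...) whose leaves are plain strings."""
    def leaves(node, prefix):
        if isinstance(node, str):          # terminal node
            return [(node, prefix)]
        label, *children = node
        out = []
        for child in children:
            out += leaves(child, prefix + [label])
        return out

    terms = leaves(tree, [])
    paths = []
    for i in range(len(terms)):
        for j in range(i + 1, len(terms)):
            (a, pa), (b, pb) = terms[i], terms[j]
            c = 0                          # length of the shared ancestor prefix
            while c < min(len(pa), len(pb)) and pa[c] == pb[c]:
                c += 1
            # Up from a to the lowest common ancestor (pa[c-1]), then down to b.
            path = list(reversed(pa[c:])) + [pa[c - 1]] + pb[c:]
            paths.append((a, tuple(path), b))
    return paths

# A tiny AST for `x = y + 1`.
ast = ("Assign", ("Name", "x"), ("BinOp", ("Name", "y"), ("Num", "1")))
for p in leaf_paths(ast):
    print(p)  # e.g. ('y', ('Name', 'BinOp', 'Num'), '1')
```

Each such triple is then embedded and weighted by attention before the per-method aggregation.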
code2seq is another notable code summarization model in the literature. Unlike code2vec, a discriminative model in nature, code2seq adopts an encoder-decoder architecture [Cho et al. 2014; Devlin et al. 2014] to predict method names as sequences of words. For decoding, code2seq attends to a set of representations designed to combine each path and token representation to generate the method names. code2seq then achieved the state-of-the-art results on all of Java-small, Java-med, and Java-large, three public datasets of Java methods.
In a similar vein to code2seq, sequence GNN also adopts an encoder-decoder architecture to predict method names. To encode a given method, sequence GNN relies on the coordination between a sequence model and a graph model. That is, a recurrent neural network first learns the sequence representation of each token in a program before a gated graph neural network computes the state for every node in the AST. Like code2seq, they use another recurrent neural network to generate method names as sequences of words.
LiGER is the first model that incorporates dynamic program features into the learning process for code summarization. Their insight is that executions, which offer direct, precise, and canonicalized representations of program behavior, help models generalize beyond syntactic features. On the other hand, LiGER uses symbolic features learned from source code to reduce the heavy reliance a dynamic model has on program executions, since high-coverage executions are not always readily available.
CONCLUSION
In this paper, we present the first formal definition of the patterns that code summarization models have learned. Based on this definition, we have developed a sound algorithm for producing such pattern definitions, and a working implementation for formalizing the patterns code2vec and code2seq have learned. We found that both code2vec and code2seq heavily focus on a small, local fraction of input methods to predict their names, indicating the limited generalizability of both models regarding global, semantic program properties. We also present two example applications of the pattern definitions. For evaluating the robustness of code2vec and code2seq, our method uses smaller perturbations and takes far fewer attempts than prior approaches to find adversarial examples. Regarding improving the accuracy of code2vec and code2seq, the new technique we designed based on adversarial training enables code2seq to achieve the state-of-the-art results on Java-med and Java-large.
• We plan to further refine and deploy our implementation to inform users of a finer-grained explanation for a predicted label, such as which expression in the method is responsible for a particular word in the output, so that users can better understand the reasoning models use when making predictions.
• We plan to further diagnose code summarization models to design a general, systematic framework for improving both the accuracy and robustness of code summarization models.
• We plan to apply our technique to study the patterns learned by other models of code for solving different tasks in programming languages.